I’m not going to repeat all the configuration steps here. If you want to see how to deploy the vSphere Docker Volume Service VIB via VUM, or how to set up Docker on Photon OS, check out the Minio post referenced above; the steps are the same. Instead, I’ll focus on how to create the volumes on vSAN, how to consume those volumes when launching Scality S3 Server, and then how to create and use buckets.
1. Create docker volumes for Scality Data and MetaData
We start by creating two volumes for Scality, one for metadata, and one for data. I picked some random sizes again:
# docker volume create --driver=vsphere --name=S3Data -o size=100gb
# docker volume create --driver=vsphere --name=S3MetaData -o size=10gb
2. Verify that the volumes were created
In the previous commands, no policy was specified, so the volumes (VMDKs) should be created with the default RAID-1 policy. We can verify this by examining the Physical Disk Placement on the container host (Photon OS VM). Here we see that it is indeed a RAID-1 configuration for the new volumes, with a witness component for quorum.
And in fact, if we examine the vSAN datastore, we can see all of the volumes residing in the dockvols folder:
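The same check can also be made from the container host itself with standard docker commands. A minimal sketch (volume names as above; the exact fields shown by `inspect` depend on the vDVS plugin version):

```shell
# List docker volumes; vDVS volumes are reported under the vsphere driver,
# typically displayed as <name>@<datastore>
docker volume ls

# Inspect a volume to see its size, datastore and creation details
docker volume inspect S3Data
```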
Again, if you want to create volumes with policies other than the vSAN default, such as a RAID-5 rather than a RAID-1, the steps on how to do this are also in the Minio post.
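As a quick sketch of what that looks like, the vDVS driver accepts a policy option at volume-create time. Here `myRAID5policy` is a placeholder name; the policy would need to be created on the vSAN cluster first, as covered in the Minio post:

```shell
# Create a volume with a pre-defined vSAN storage policy instead of the default
# "myRAID5policy" is a placeholder -- substitute a policy defined on your cluster
docker volume create --driver=vsphere --name=S3DataR5 -o size=100gb \
  -o vsan-policy-name=myRAID5policy
```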
3. Launch Scality S3 Server and consume the volumes
We can now go ahead and run the Scality application, specifying our new volumes at the docker command line:
root@photon-machine [ ~ ]# docker run -p 8000:8000 --name s3 -v "S3Data@vsanDatastore:/usr/src/app/localData" -v "S3MetaData:/usr/src/app/localMetadata" scality/s3server
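If you would rather not tie up your shell, the same command can be run detached, and you can then confirm the endpoint is answering. This is a sketch; an unauthenticated request to the S3 endpoint is expected to come back with an S3-style error response, which is enough to show the server is listening:

```shell
# Run the same container in the background
docker run -d -p 8000:8000 --name s3 \
  -v "S3Data@vsanDatastore:/usr/src/app/localData" \
  -v "S3MetaData:/usr/src/app/localMetadata" \
  scality/s3server

# Confirm the container is running and the endpoint responds on port 8000
docker ps --filter name=s3
curl -i http://localhost:8000/
```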
4. Access the Scality S3 Server buckets
Scality S3 Server is now up and running. The endpoint for Scality S3 Server is the container host (Photon OS) IP address, on port 8000. The default access key is accessKey1 and the default secret key is verySecretKey1. You can once again use something like S3 Browser from NetSDK LLC to create, upload to or read from the S3 buckets, as shown in the previous post. There are many other S3 clients, so use whichever you prefer. After adding the location and access key/secret key, you should be able to create buckets and upload files/folders.
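If you prefer the command line to a GUI client, the AWS CLI can talk to the same endpoint. A minimal sketch, assuming the AWS CLI is installed and using a placeholder host IP of 192.168.1.100 (substitute your Photon OS host's address) and a hypothetical bucket name:

```shell
# Point the AWS CLI at the Scality endpoint using the default credentials
export AWS_ACCESS_KEY_ID=accessKey1
export AWS_SECRET_ACCESS_KEY=verySecretKey1
ENDPOINT=http://192.168.1.100:8000   # substitute your container host IP

# Create a bucket, upload a file, then list the bucket contents
aws s3 mb s3://demo-bucket --endpoint-url $ENDPOINT
aws s3 cp /etc/hostname s3://demo-bucket/ --endpoint-url $ENDPOINT
aws s3 ls s3://demo-bucket --endpoint-url $ENDPOINT
```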
Now if you stop and start the Scality application, but use the same volumes in the docker command line, your buckets and their contents are persisted and available.
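As a sketch of that persistence check (container and volume names as above, and assuming the default access/secret keys are exported for the AWS CLI):

```shell
# Stop and remove the container -- the vDVS volumes themselves are untouched
docker stop s3 && docker rm s3

# Start a fresh container against the same volumes
docker run -d -p 8000:8000 --name s3 \
  -v "S3Data@vsanDatastore:/usr/src/app/localData" \
  -v "S3MetaData:/usr/src/app/localMetadata" \
  scality/s3server

# Buckets created earlier should still be listed
aws s3 ls --endpoint-url http://localhost:8000
```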
Scality, deployed via docker, is another option for those of you who are looking for an S3 object store running on vSAN, utilizing Project Hatchway to create persistent container volumes.