A closer look at Bull Storage Solutions

I thought it was about time that I looked at some of the larger storage vendors closer to home. One of these is of course Bull. This company is probably more familiar to those of us based in Europe than to those of you based in the Americas or Asia Pacific. However VMware customers in EMEA will have seen them in the Solutions Exchange at VMworld Europe, where they have a reasonably large presence. After some conversation with my good pal Didier Pironet, whom I’ve met at a couple of recent VMUGs, I was introduced to Philippe Reynier, a manager in the Bull StorWay Competence Center and Solution Center. Philippe provided me with a lot of good detail on Bull’s storage solutions, which I will share with you here.

My first ask was for an overview of the current Bull storage offerings.

Bull Storage Solutions

Currently Bull offer four different models of storage array: the Optima 1600, 2600, 3600 and 4600. The entry-level configuration contains up to 96 drives, and this can scale up to a high-end configuration of 960 drives. All arrays come with a dual-controller architecture which includes cache mirroring for redundancy. The protocols supported by the arrays include both Fibre Channel and iSCSI (over 1Gb and 10Gb interconnects). The arrays support block-level protocols only – there is no NFS or SMB support, and no FCoE support at this time.
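If you want a quick host-side check of which of these protocols your ESXi hosts are already equipped for, the standard esxcli commands will list the Fibre Channel and iSCSI adapters present. This is just a connectivity sanity check from the ESXi side; the Optima arrays themselves are of course configured through Bull’s own management tools:

# esxcli storage san fc list
# esxcli iscsi adapter list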

The Optima series are hybrid storage arrays, meaning that they support a combination of both SSDs (Solid State Drives) and magnetic disks. The SSDs can serve a number of different purposes depending on customer requirements – they can be used directly as a high-performance, low-latency LUN, they can be used as a dedicated caching layer, or they can be used as part of an auto-tiering solution which includes both SSD and magnetic disks. The auto-tiering approach moves high-demand hot blocks to the flash layer, while moving low-demand cold data to the slower, less performant magnetic disks.

Features

All arrays support native snapshot functionality and replication. There is a maximum of 16 snapshots per volume/LUN, but there is no array-wide maximum for snapshots. There is also thin provisioning support.

There are also some nice additional features on the arrays, such as “Advanced Phoenix Technology”, which is used to anticipate disk failures, as well as “Drive spin down” for energy efficiency. However, there is no dedupe or compression on the arrays just yet.

The current generation of arrays also provides a Quality of Service feature which is achieved through cache partitioning. Cache partitioning acts as a sort of QoS control, as it limits the amount of cache resource that may be consumed by sets of LUNs. This prevents any one application from consuming the entire cache.

[Image: BULL – Cache Segments]

Both synchronous and asynchronous replication are supported on the arrays.

vSphere integration

From a vSphere integration perspective, there are a number of nice integration points. First, the arrays use VMware’s own Native Multipathing Plugin (NMP), the ALUA SATP and the MRU PSP. Information about these plugins, as with every supported array, can be found on the VMware Compatibility Guide here. The Round Robin PSP is not supported with the Optima arrays at the time of writing.
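You can verify which SATP and PSP have claimed a Bull Optima device on your own hosts using the standard esxcli NMP commands. The device identifier below is just a placeholder, and the output is simply an illustration of what I would expect to see with the ALUA SATP and MRU PSP in play:

# esxcli storage nmp device list -d naa.xxxxxxxxxxxxxxxx
naa.xxxxxxxxxxxxxxxx
   Device Display Name: Fibre Channel Disk (naa.xxxxxxxxxxxxxxxx)
   Storage Array Type: VMW_SATP_ALUA
   Path Selection Policy: VMW_PSP_MRU
   Working Paths: vmhba2:C0:T0:L1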

The arrays also support the VAAI block primitives – ATS, XCOPY, Write_Same, Thin Provisioning and UNMAP. This can be verified via the esxcli storage core device vaai status get command run against a Bull Optima storage device from an ESXi host.
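As a rough sketch of what that check looks like (again, the device identifier is a placeholder and the output is illustrative of a device reporting all of the block primitives as supported – note that the UNMAP primitive shows up as the Delete Status):

# esxcli storage core device vaai status get -d naa.xxxxxxxxxxxxxxxx
naa.xxxxxxxxxxxxxxxx
   VAAI Plugin Name:
   ATS Status: supported
   Clone Status: supported
   Zero Status: supported
   Delete Status: supported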

From a DR perspective, Bull have a Storage Replication Adapter (SRA) for VMware’s Site Recovery Manager (SRM) to facilitate controlled failover between sites in the event of a disaster. As mentioned, the arrays support both synchronous and asynchronous replication.

There is also a vSphere management plugin for the vSphere Web Client. This is quite nicely done. On the landing page, there is a Bull Storage icon which can be selected to get you started.

[Screenshot: Bull Storage icon on the vSphere Web Client landing page]

This provides a number of nice features for managing and deploying Bull storage in a vSphere environment. Under the Monitor tab of each virtual machine, details related to the physical storage utilized by a virtual machine’s hard disks are displayed (in the Bull Storage view):

[Screenshot: Bull Storage view under the virtual machine Monitor tab]

As you can see, this includes useful array-side information, which is always good for management and troubleshooting. The plugin also displays some useful physical storage information when provisioning new storage. A list of pools with varying RAID levels is displayed in the provisioning wizard, and administrators can choose to carve out a new LUN and bind it to particular ESXi hosts from the vSphere Web Client – perfect for someone who manages both the virtual and storage infrastructures:

[Screenshot: Bull storage provisioning wizard in the vSphere Web Client]

Update: Bull was acquired by Atos in 2014.