
REVIEW: Sun OpenSolaris for storage

Sun Microsystems' OpenSolaris has been in the news recently, specifically targeting do-it-yourselfers and storage folks. It got a ton of media coverage and landed squarely on my radar, since I happened to be looking for a way to put some recently acquired hardware to work expanding my home-based storage capacity.

This is not going to be a review of the operating system in the traditional sense. I will get a little technical in some areas, with performance numbers and such, but I'm not out to prove that it can be done; rather, I want to determine whether Sun's offering is of real value to the storage community. My goal is to create a storage environment that resembles that of many small to mid-sized businesses (SMBs), with approximately 7 TB of total managed storage.

A brief primer on OpenSolaris

OpenSolaris (referred to as Project Indiana, or code-named Nevada) is very similar to Linux in concept: Sun maintains the core codebase, and the community (Sun employees included) contributes to it. Anyone can access the source code and build their own distribution based on that code. The community is free to modify it to a certain extent and redistribute it with the proper acknowledgements. This differs significantly from Solaris 10, which is free but not "open," because it is a proprietary, trademarked product of Sun. There are currently five complete distributions of OpenSolaris available, four of which are offered by groups other than Sun. I used the DVD-based OpenSolaris Express Community Edition for this project because the CD-based OpenSolaris distribution does not include some of the storage-specific tools.

There are quite a few free ways to turn a collection of disks into a network-attached storage (NAS) device; most are based on Linux or BSD and offer polished Web-based administration. This isn't Sun's first contribution to the do-it-yourself storage community: after acquiring Cobalt Networks and retiring the Cobalt product line shortly thereafter, Sun released the Cobalt operating system source code to the community under an open license. From that contribution came a NAS storage platform called BlueQuartz, which can be installed on top of BSD, Linux and so forth.

I'm taking the time to point out that there are alternatives to installing and learning OpenSolaris to get at your disks over the LAN, in an article geared toward doing just that, so there must be a compelling reason to invest the time and effort. Simply put, the Zettabyte File System (ZFS) is that reason. ZFS is a feature-rich file system that is very well-suited to the type of workload and management typical of SMB storage devices. It mitigates most of the risks of not using a hardware-based RAID controller, adds an easy facility for making and managing snapshots, and supports multi-terabyte volumes. As of this writing you need OpenSolaris, one of the distributions based on it, or Solaris 10 to get access to ZFS at boot time, which, in my opinion, is very important to the use of any file system.
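To give a flavor of what that management looks like, here is a minimal sketch of building a redundant pool and a file system from the command line. The pool name, file system name and disk device names are placeholders of my own; substitute whatever your hardware actually presents.

    # Create a single-parity raidz pool named "tank" from four SATA disks
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0

    # Carve out a file system for media; it is mounted at /tank/media automatically
    zfs create tank/media

    # Check pool health and capacity
    zpool status tank
    zpool list tank

That is essentially the whole setup; there is no separate volume manager, partitioning step or mkfs run.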

Testing specifications

My test bed is a Tyan GX28 with two dual-core 2.4 GHz Opteron processors, 8 GB of RAM, four 400 GB SATA drives, an Areca PCI-X SATA RAID controller and two Gigabit Ethernet cards. I'm running Solaris Express Community Edition build 87.

I won't go into detail about the installation process, but there are some key points worth noting. The first hurdle is burning the disk. You must use a full-featured burning application if you plan to burn the ISO from within Windows; PowerISO and the ISO Recorder Powertool will not burn the DVD properly, no matter how many times they claim to have done so successfully. The disk those applications produce will boot, but it won't get past the GRUB prompt and will never present the installation boot menu. Save yourself the headache and either burn from Linux/Solaris or use Nero or something similar in Windows. While the installation isn't overly complex, you should be certain that your hardware is on the OpenSolaris hardware compatibility list (HCL). I found that out the hard way.

VMware Server may not let you install OpenSolaris properly, at least not with the builds I've been using; the X setup is broken and takes a good bit of work to fix. It seems that since Sun's acquisition of VirtualBox, support for VMware's workstation-class family of products isn't where it used to be.

As a side note, OpenSolaris installs and runs quite well in VirtualBox without any tweaking. There is, however, a bug in the version of an integral library that ships with OpenSolaris Express Community Edition; it is a strange one that lets you boot into the OS once after installation, after which it will simply not boot into anything usable. I've documented a workaround and step-by-step instructions here.

If you don't have experience with Solaris you will face a pretty steep learning curve; not insurmountable, but steep all the same. The maddening part is the lack of a clearinghouse of step-by-step instructions for common tasks, like the one the Linux Documentation Project provides. I understand the product and the project are young, but it would be nice to see more detailed instructions for non-Solaris admins, or even a Linux-to-Solaris common-task translation cheat sheet.

To find examples and configuration help I had to have some prior knowledge of which syntax or command achieved the desired result, and I had to get very comfortable with Google's advanced search options. While this may seem like a "duh" kind of statement, consider it in the context of trying to enable external access to the ZFS Web console. One would think that simply editing the config file in the /etc/webconsole directory would do the trick; it doesn't. You need to know about svccfg, and that is the maddening part, especially if Sun is targeting people who would normally use Linux!
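For the curious, the switch lives in SMF rather than in a flat config file. The property and service names below come from Sun's Solaris 10-era documentation and may differ slightly between builds, so treat this as a sketch rather than gospel:

    # Allow the Java Web Console (which hosts the ZFS admin GUI) to listen on
    # external interfaces instead of just localhost
    svccfg -s svc:/system/webconsole setprop options/tcp_listen=true

    # Re-read the configuration and restart the console
    svcadm refresh svc:/system/webconsole
    /usr/sbin/smcwebserver restart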

Recommendations

Be aware that at the time of this writing OpenSolaris Developers Edition is a work in progress. I encountered a few bugs, and I can tell you that patience and a good handle on troubleshooting methodology will keep you from pulling your hair out. For instance, although my Areca card is on the HCL, and Areca provides drivers that install rather easily, those drivers create a problem when rebooting into 64-bit mode. Without knowing a little about what the driver and the installer were doing, I would never have figured out that the installer was putting a 64-bit driver into 32-bit mode, which led to an inability to mount the file system and a looping init segfault. If you think that is a handful to read, imagine what it was like not being able to look into the file system (rescue mode and the live CDs don't like external drivers) to ferret out what happened and why.

However, the community is active and growing, and more people are documenting their travails with the product. Over time, this will make it easier to figure out specifically what problem you may be having. In no way should you consider this all bad. I've had similarly maddening issues with Linux over the years. It is simply part of the product's growth process.

It may very well behoove you to ditch RAID controllers (which I ended up doing in order to finish this article in a timely fashion) and simply use the onboard SATA/SAS controllers, or to carefully choose a controller whose drivers are native to the OpenSolaris kernel. The storage subsystem proved to be the most fickle part of my install process. The performance of the operating system in general is what you would expect from Solaris.
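To illustrate why the RAID card is expendable, here is roughly what dealing with a failed disk looks like when ZFS owns the redundancy. The pool and device names are placeholders, and the hot spare is optional:

    # Ask ZFS which pools, if any, are unhealthy
    zpool status -x

    # Swap the dead disk, then tell ZFS to resilver onto the replacement
    zpool replace tank c1t2d0

    # Optionally keep a hot spare in the pool for next time
    zpool add tank spare c1t4d0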

The OS itself is responsive and seems to put memory and CPU resources to good use, but it also had quite a bit of hardware under it (two dual-core Opterons and 4 GB of RAM). The installer wants at least 768 MB of RAM for a GUI install, though to be fair the OpenSolaris team never promised a "lightweight" OS of any kind. If you plan to have this thing do heavy storage lifting, you'd probably want at least a dual-core CPU under the hood. I tested the server under load from Windows Vista, Windows Server 2003, Windows Server 2008 and Linux (Debian 4.0) machines, all requesting reads and writes both at roughly the same time and independently (noticeably absent from that list are VMware ESX and Windows Hyper-V; I'll get to those in a follow-up article). All but the Windows Vista client had a Gigabit Ethernet port dedicated to iSCSI.
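For context, exporting a ZFS-backed iSCSI LUN in Solaris Express builds of this vintage is a short exercise. The sketch below uses the older shareiscsi property and the iscsitgt service rather than the newer COMSTAR framework, and the pool, volume name and size are my own placeholders:

    # Create a 200 GB ZFS volume (zvol) to act as the LUN backing store
    zfs create -V 200g tank/iscsi-lun0

    # Enable the legacy iSCSI target daemon and export the volume
    svcadm enable svc:/system/iscsitgt:default
    zfs set shareiscsi=on tank/iscsi-lun0

    # Confirm the target was created
    iscsitadm list target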

Based on the unscientific testing I've done so far, the limiting factor in performance is the network. The CPU and RAM counters on the server never broke a sweat, with CPU use averaging below 20% for the duration of my testing sessions. The OpenSolaris server is attached to two Gigabit Ethernet ports, with a single port dedicated to iSCSI traffic, and I got roughly 60 MBps with a single machine writing data to the server; read speeds were similar, and both dropped as more clients were brought online.

With all four clients attacking one network port, my speeds dipped to around 40 MBps. I definitely need a higher density of Gigabit Ethernet ports so I can couple multiple ports together (bonding or EtherChannel, depending on your brand alliance) to get consistent performance under higher load. My intention for this server is to store my Windows Media Center and MythTV recorded TV shows, and so far 60 MBps has been enough to save recordings directly to my OpenSolaris iSCSI "SAN." I lost very few frames when encoding directly to the iSCSI-attached shares; predictably, I lost more frames over CIFS because of its additional overhead. I also tested the fault tolerance of ZFS by yanking out a hard drive, and ZFS did its magic, repeatedly throwing alarms. But even while complaining, my storage stayed available, all without a RAID card.
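For what it's worth, the port-coupling mentioned above is handled natively in OpenSolaris with dladm link aggregation. A minimal sketch, assuming two unplumbed e1000g interfaces, an 802.3ad-capable switch and an address range of my own choosing; the dladm syntax shifted between builds around this time, so check the man page, but this is the Solaris 10-style invocation:

    # Bundle two Gigabit interfaces into aggregation key 1
    dladm create-aggr -d e1000g0 -d e1000g1 1

    # Bring the aggregated link up with an address
    ifconfig aggr1 plumb 192.168.1.10 netmask 255.255.255.0 up

    # Verify the aggregation and its member ports
    dladm show-aggr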

I think Sun has done the right thing in releasing OpenSolaris to the public this way. However, it is a little young, and it needs some time to mature before you can trust it with mission-critical tasks the way you can trust Linux. I like the power and flexibility OpenSolaris offers; I also like the security model it is built on and the years of experience Sun has with this sort of thing. However, the package manager isn't the best or easiest, and the configuration interface is a little rough, so I installed Webmin to handle remote administration chores. As far as the storage-related aspects of this operating system go, it's a home run. It offers iSNS, iSCSI, CIFS, NFS (Sun did help develop NFS, you know!) and, most of all, ZFS.
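Sharing over those file protocols is, again, largely a matter of setting ZFS properties. A quick sketch using my placeholder file system names, with the caveat that the in-kernel CIFS service is a fairly recent addition to these builds:

    # Export a file system over NFS
    zfs set sharenfs=on tank/media

    # Export the same file system over CIFS via the in-kernel SMB server
    svcadm enable -r smb/server
    zfs set sharesmb=name=media tank/media

    # List what is currently shared
    sharemgr show -vp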

ZFS takes quite a bit of worry out of my day. I don't have to purchase expensive RAID controllers; I can use the SATA ports on my motherboard and still get solid redundancy and fault tolerance. I can take snapshots and revert to them via a browser interface, and I can create and manipulate ZFS volumes through that same interface. Couple all that with the ability to segregate the server into logical partitions, much like IBM's LPARs, and you have some serious potential. Once this thing is set up, I don't foresee much care and feeding beyond critical security patches.
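The same snapshot operations are available from the shell for anyone who prefers it to the Web console; a brief sketch, again with placeholder names:

    # Take a point-in-time snapshot before making risky changes
    zfs snapshot tank/media@before-cleanup

    # List existing snapshots
    zfs list -t snapshot

    # Roll the file system back to the snapshot if things go wrong
    zfs rollback tank/media@before-cleanup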

Cost comparisons

Taking a step back and looking at the cost of putting this thing together, approximately $1,100 for 1.1 TB usable, or roughly $1,000 per usable terabyte, isn't bad comparatively speaking. If you have access to a mid-level SMB admin with a little Solaris experience (again, provided your hardware is on the approved HCL), you may just have a compelling solution from an acquisition and operational cost standpoint. Paired with a denser chassis like Sun's "Thumper" or something similar such as Nexsan Technology's SATABeast, I can see OpenSolaris becoming, within six months to a year, an alternative to the de facto Linux kernel that entry-level array vendors like Promise Technology run on their arrays. OpenSolaris will need to grow up a little before it becomes a serious threat outside the dedicated storage-geek purview, though. I firmly believe it will get there; however, it needs to move a little more quickly on the remote administration piece (integrate Webmin).

OpenSolaris today lacks polish in much the same way Linux did early in its life, and it very much reminds me of trying to get Debian 2.0 (Hamm) running on my PII 400 MHz in 1998. My bottom line: keep a close eye on the development of this product. Once it matures a bit more it will save you some cash, enough that paying for support every once in a while won't seem like such a terrible thing. If you have previous experience with Solaris, either on SPARC or x86, you will definitely find value in this initiative. If time is your enemy and you have a mostly Linux- or Windows-based skill set, you would be better served waiting until the project has cut its teeth and integrated more of the system admin tools into the Java Web Console, or until ZFS is better integrated into Linux.

This story first appeared at searchsmbstorage.com