Well, it’s finally here - a complete rewrite of the FreeNAS middleware and UI on the latest version of FreeBSD, bringing things up to modern standards with nice features like USB 3 support for those of us building smaller home systems.
Lots of new goodies that I haven’t yet had the time to dive into deeply, like the addition of a proper hypervisor so that you can run virtual machines directly on a FreeNAS box, as well as Docker support to replace the older BSD jails method of packaging applications without the overhead of a full virtual machine.
Virtual Machines vs Docker
For the moment, the virtual machine option isn’t going to be terribly useful to me since I’ll generally be using VMware ESXi for my VMs and relying on FreeNAS to provide the underlying storage, whether via NFS or iSCSI (but almost always NFS). That said, I can see this being useful for a lot of people who would like to take advantage of Linux applications that don’t have current equivalents in the BSD world.
The Docker side of the house is considerably more interesting to me since this may be a quick shortcut to having a Docker-enabled system in the home lab without having to build it out myself. That said, I have to dig in a little deeper to see how to add additional registries, since it currently points to a restricted list from the FreeNAS registry. But definitely something to look into.
The first issue that I ran into is that, for some reason, the UI doesn’t behave 100% correctly under Safari. A quick example: activating a service like NFS doesn’t work in Safari - clicking the button to enable it and then saving has no effect. Switching to Chrome fixes this directly, and the same actions result in the service being correctly started. I’m a little disappointed that Safari isn’t a first-class client browser, but at least there are reasonable alternatives.
This next one would seem to be more of an underlying BSD issue, but I have a bunch of multi-disk USB 3 boxes that I like to use for quick testing and for big data movements, where I can take advantage of ZFS mirroring with checksums and read load balancing across disks. However, there is an odd situation that you might run into when using these kinds of boxes, possibly related to the way that disk serial numbers are interpreted in BSD.
My first test bed is an Intel NUC attached to an Akitio NT2 with two 4 TB Seagate 2.5” disks. When I go to create a volume (zpool) and look at the disks available, it shows only one disk with the curious name mpath0, rather than the expected BSD naming of da0 and da1 for the two disks. Digging in on the command line, I found that da0 and da1 are both present in /dev, but for some reason I can’t use them to create a zpool, even from the command line. The hint here is the mpath0 name of the one disk I do see, which suggests there’s some multipath code running that would normally map the presence of the same disk seen via two different communications paths (either remote iSCSI volumes or dual-ported SAS disks).
Looking at the contents of dmesg, it becomes clear that gmultipath has determined that my disks da0 and da1 are in fact the same disk! Which is understandable when you dig into what the box is reporting back to the BSD APIs:
[root@freenas] ~# camcontrol devlist
<TS32GMTS400 N1126KB>    at scbus0 target 0 lun 0 (ada0,pass0)
<inXtron, NT2 U3.1 0>    at scbus2 target 0 lun 0 (pass1,da0)
<inXtron, NT2 U3.1 0>    at scbus2 target 0 lun 1 (pass2,da1)
[root@freenas] ~# camcontrol inquiry da0 -S
160318E6006C
[root@freenas] ~# camcontrol inquiry da1 -S
160318E6006C
So from the BSD perspective, these two disks are the same one.
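You can confirm the diagnosis from the GEOM side as well. A minimal sketch, assuming the same mpath0/da0/da1 names as on my box (yours may differ):

```shell
# Show which GEOM multipath devices exist and which providers they claimed
gmultipath status          # should list mpath0 built from da0 and da1
gmultipath list mpath0     # detailed view: state and consumed providers
# Matching serial numbers are what triggers the automatic grouping
camcontrol inquiry da0 -S
camcontrol inquiry da1 -S
```

If both inquiries return the same serial, gmultipath is doing exactly what it was designed to do - it just has no way of knowing these are physically separate disks behind a cheap USB bridge.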
So if you are not using hardware that is explicitly designed for multipathing on the back-end, you’ll probably want to disable the multipath kernel module. I’ve tried a number of approaches without success, so I went with the brutal method of simply moving the module out of the /boot/kernel directory:
mv /boot/kernel/geom_multipath.ko ~/
After this command and a reboot, the system goes back to the old school da0 and da1 disks, seen as two different devices.
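A quick sketch of the post-reboot sanity check, under the assumption that the module move took effect:

```shell
# With geom_multipath.ko gone, the disks should appear independently again
camcontrol devlist    # both da0 and da1 should be listed
ls /dev/da*           # raw device nodes present, no mpath0 under /dev/multipath
gmultipath status     # should report nothing claimed (or fail to load the class)
```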
I suspect that this module will probably come back when you run an update so keep an eye open.
And in general, be warned that the BSD USB hotplug feature is a little hit and miss. You’re much better off having any USB storage devices you want online connected at boot time so that they’re properly seen by the OS. Even then, for some reason, the webUI will not always show you all available disks for creating your zpool. But all is not lost - you can create the zpool at the command line and have it recognized by the UI. In fact, the websockets part of the UI is startlingly fast: as soon as I created the pool at the command line, it popped up in real time in the webUI.
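For reference, creating a mirrored pool by hand is a one-liner. This is a sketch using my da0/da1 device names and a hypothetical pool name of tank - substitute your own (and note that pools built by the UI normally sit on GPT partitions rather than raw disks, so a hand-built pool will look slightly different):

```shell
# Create a two-way ZFS mirror across the two USB disks
# (pool name "tank" and devices da0/da1 are assumptions - adjust to your system)
zpool create tank mirror /dev/da0 /dev/da1
zpool status tank    # both disks should show as ONLINE under the mirror vdev
```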
One thing that still hasn’t been addressed in the installation process is the ability to select the local keyboard type before entering the new root password. You can change the console keyboard layout in the setup wizard after installation, but it should be an option during the initial install so that a complex password can be typed correctly.
The new UI is a lot prettier than the last version and a little clearer, without the confusion between the duplicate names in the top bar and the hierarchical menu on the left. I really like the way that alerts are displayed in the top corner and actions are shown in the right column with progress and status, along with some more details when you click on them. I’d like to see something that gives more details on task failures (log information, etc.).
I like the fact that I can select a few widgets that will live on the dashboard (IOPS, ARC hits, network bandwidth etc.) so that I have that information immediately upon connecting to the server.
The UPS integration seems to have gotten a serious overhaul and is now exposed in the new UI, which makes me happy, with an exceedingly complete list of supported UPS models, including the EATON models that I use in my home lab. Now to find out just how far I can push things with the possibility of scripts to shut down other devices on the network.
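Since the UPS service here is built on Network UPS Tools (NUT), the natural hook for shutting down other devices is upsmon’s NOTIFYCMD. This is only a sketch of the idea - the script path and hostnames are my own placeholders, not anything FreeNAS ships:

```shell
#!/bin/sh
# Hypothetical on-battery hook, wired into NUT via upsmon.conf lines like:
#   NOTIFYCMD /path/to/onbattery.sh
#   NOTIFYFLAG ONBATT SYSLOG+EXEC
# Hostnames below are placeholders for other devices in the lab.
for host in esxi01 nas-backup; do
    ssh -o BatchMode=yes root@"$host" poweroff &
done
wait
```

In practice you’d want key-based SSH from the FreeNAS box to each target, and probably upssched for a delay so a brief power blip doesn’t take the whole lab down.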