These are notes on how I set up SMB and AFP shares on FreeNAS 11. The key to successful use of both in an Apple environment is to use AFP only for Time Machine backups and SMB for everything else. I was unable to make the two protocols coexist under any other configuration. This is not so bad, as Apple has been migrating toward the exclusive use of SMB for some time. These notes describe how to:
- Create public SMB shares accessible by anyone on the network.
- Configure AFP to support Time Machine backups.
- Create individual users, each with their own home directory on the server, accessible only by them over SMB.
- Configure SMB and AFP to use different IP addresses.
- Create a user for each device that will make Time Machine backups on the server.
These notes are largely based on this great writeup.
I'm using a Dell R510 with eight LFF bays (aka "Mack8"). Mine came with a Dell H700 PERC RAID controller. One of the reasons I chose FreeNAS is that it supports the ZFS filesystem. For ZFS to perform properly, it must have direct access to the disk drives. Unfortunately, the H700 only supports RAID configurations -- i.e., it does not offer JBOD / disk passthrough to the operating system. I had to replace the H700 with an HBA that passes the drives through.
Survey of Disk Controllers
I made the following notes while wading through all of the possible controllers that could be used with ZFS.
The Dell PERC 5/i is an old 3Gb/s SAS/SATA RAID (0, 1, 5, 10, 50) controller with battery backup. It will not pass raw disks through (i.e., JBOD) to the OS for use by ZFS, etc. It is limited to 2TB drives or smaller.
The Dell PERC 6/i is an old 3Gb/s SAS/SATA RAID (0, 1, 5, 6, 10, 50, 60) controller with battery backup. It will not pass raw disks through (i.e., JBOD) to the OS for use by ZFS, etc. It is limited to 2TB drives or smaller.
The Dell H700 is a newer 6Gb/s SAS/SATA RAID (0, 1, 5, 6, 10, 50, 60) controller with battery backup. It has two x4 SAS ports. It will not pass raw disks through (i.e., JBOD) to the OS for use by ZFS, etc. It's a great controller if you aren't using ZFS. The H800 is like the H700, except that it has external SAS ports and is intended to be used with a drive expansion unit. It can have either 512MB or 1024MB of DDR cache. I believe it is limited to 6TB drives or smaller.
The Dell H200 is a newer 6Gb/s SAS/SATA RAID (0, 1, 10) controller without battery backup. It has two x4 mini-SAS ports. If disks are present but not configured in the H200, it will pass the raw devices through to the OS. It uses the LSI SAS 2008 chipset, is quite similar to the popular LSI 9211-8i HBA (and the IBM M1015), and can be reflashed with LSI firmware. Conflicting reports suggest the maximum supported drive size is 2TB with Dell firmware; reflashing to LSI firmware removes any such limitation.
The Dell SAS6/iR is the lowest-end RAID (0, 1) controller, at 3Gb/s. I haven't researched it carefully, but it may be the same board as the H200, with SATA connectors instead of mini-SAS. It appears that it can be reflashed in the same way as the H200, but I have not verified this.
The LSI 9240-8i is an extremely popular low-end RAID controller which is supported by almost all operating systems. It requires a PCIe x8 slot. It has two x4 mini-SAS SFF-8087 connectors and supports 6Gb/s SAS/SATA. The popular IBM M1015 is a relabeled version of this card. With default firmware, the 9240 does not pass raw drives through to the OS. However, it can be reflashed with firmware for the LSI 9211 HBA, which does allow raw disk pass-through (also called 'IT mode').
SFF-8087 connectors are termed 'mini-SAS' connectors and SFF-8484 are termed 'SAS' connectors. The number of pins is the same, but the SFF-8087 is much more compact.
Using an H200 in the R510-8
The R510 has an "internal" PCI slot referred to as the "storage" slot. The H700 occupied that slot. If possible, I wanted to use a storage controller that would function in that slot. I chose a Dell H200.
According to Dell, my R510 shipped with a SAS6 controller. By the time the R510 fell into my possession, an H700 had been installed in it.
The SAS6 controller has two SFF-8484 SATA connectors. The H200 and the H700 both have two SFF-8087 SAS connectors. Had the original SAS6 controller still been in the server, new cables would have been required. However, the cables were changed when the H700 was installed in my R510. In other words, I lucked out and could use the existing cables for the new H200 controller.
One other comment about cables. The original Dell cables that would have been used with an H700 controller in the R510 are just long enough to reach the storage slot -- they are not long enough to reach an H200/H700 that has been relocated to one of the other PCI slots. The cables in my R510/H700 were not original Dell cables -- they were, amazingly, long enough to allow me to use the H200 in the rear PCI slot.
Cable comment number three: I believe the correct Dell part numbers for the H200/H700 in the storage slot are Y673P and P745P. When you look at those, you'll see that they use right-angle connectors. You'll also see that they run between $40-$95 on eBay. My cables look something like this. The key point being that straight SFF-8087 connectors work just fine.
Reflashing the Dell H200 to IT Mode
Out of the box, the H200 acts like an H700 -- i.e., it does not pass through drives to the OS. However, the H200 is based on the LSI 9211-8i chipset, so it can be reflashed with LSI firmware, which does allow disk passthrough (known as "IT Mode").
To reflash the H200, I followed the instructions here.
They are extremely wordy, to the point of confusion, but I managed to perform
the reflash without error. The one issue that I encountered was that the
megarec.exe command would hang on the R510 and never progress. I finally
removed the H200 from the R510 and put it in an older PE2900 machine, where I
could reflash it without problem. Once reflashed, the H200 was moved back to
the R510. This hang apparently occurs on some R610 and R710 servers as well.
Note that the reflashed H200 will no longer display a configuration option at boot time; it simply passes through the drives with zero configuration required.
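For reference, the reflash boils down to a short command sequence run from a DOS boot disk. The firmware filenames below are the ones commonly distributed with such guides; treat this as a sketch of the procedure and follow your guide's exact files and ordering:

```shell
# Wipe the Dell SBR and the flash region (destroys the Dell firmware).
megarec -writesbr 0 sbrempty.bin
megarec -cleanflash 0
# Reboot, then flash Dell's 6Gbps SAS firmware as an intermediate step.
sas2flsh -o -f 6GBPSAS.FW
# Reboot again, then flash the LSI 9211-8i IT-mode firmware.
sas2flsh -o -f 2118it.bin
# Restore the controller's SAS address (printed on a sticker on the card).
sas2flsh -o -sasadd 5xxxxxxxxxxxxxxx
```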
One disappointment was that the reflashed H200 is no longer considered a "Dell
device" and the R510 wont recognize it in the internal storage slot. I
relocated the controller to one of the standard PCI slots and it worked
without problem. I would like to revisit this in the future. It may be that
reflashing the H200 with the Dell 6GBPSAS.FW firmware would allow IT mode
as well as allow the H200 to return to the storage slot. I don't know.
Dell iDRAC 6 Enterprise
The DRAC provides a means for out-of-band management of the R510.
I configured the DRAC with a static IP address on the management network.
Installing FreeNAS 11
With the controller reflashed and working well, it is time to install FreeNAS 11. There are many guides to installing FreeNAS, so this section is light on particulars. Refer to one of those guides if you need assistance with the basic install of FreeNAS.
FreeNAS 11 is intended to be installed on flash drives rather than hard drives, thereby conserving drive bays. I installed two 32GB flash drives in the R510's internal USB ports inside the case; they are located on the other side of the status display. This makes for a neat configuration.
I made a bootable USB drive containing the FreeNAS 11 install image, booted to the install image, and installed FreeNAS to a mirrored ZFS pool which used the two internal flash drives.
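Writing the installer image to a USB stick is a one-liner on most Unix systems. The image filename and device name below are examples; double-check the target device (gpart list on FreeBSD, diskutil list on macOS) before writing, and note that GNU dd wants bs=1M rather than bs=1m:

```shell
# Write the FreeNAS installer image to a USB stick (here /dev/da1).
# WARNING: dd destroys whatever is currently on the target device.
dd if=FreeNAS-11.0-RELEASE.iso of=/dev/da1 bs=1m conv=sync
```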
During the install, I named the machine files.
The R510 has two GbE ports: GbE1 and GbE2. I assigned GbE1 the address 192.168.88.21; this will be the management address of FreeNAS. GbE1 and GbE2 map to the BSD devices bce0 and bce1, respectively.
FreeNAS Network Configuration
I ran into numerous problems trying to make SMB and AFP coexist on the same machine. Part of the solution that I reached was to assign each protocol its own IP address.
Under 'Network->Interfaces', choose the interface on which two IP aliases will
be created; in my case it is bce0. 'Edit' the interface. Under 'Options', add two aliases: one for 192.168.88.22 (for SMB) and one for 192.168.88.23 (for AFP).
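Under the hood, those aliases are just additional addresses on bce0. The FreeBSD equivalent looks like this if you want to sanity-check from the FreeNAS shell (the GUI settings are what persist across reboots; these commands are illustrative):

```shell
# Add the two aliases by hand -- this is what the GUI does for you.
ifconfig bce0 alias 192.168.88.22 netmask 255.255.255.255
ifconfig bce0 alias 192.168.88.23 netmask 255.255.255.255
# Verify that all three addresses are now present on the interface.
ifconfig bce0 | grep inet
```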
FreeNAS SMB Configuration
ZFS Datasets to be Shared
I have two ZFS datasets that I wish to share generally: /mnt/tank2/media and /mnt/tank2/pub. I also created a ZFS dataset that will contain users' home directories: /mnt/tank2/home.
Configure Users and Groups
Create a Group
Create the khe group, with 'Permit Sudo' checked (although sudo is not
necessary for SMB/AFP).
Create a Home Directory
Create a home directory for the user in /mnt/tank2/home.
Create a User
Create the khe user with this information, then click 'OK':
- Username: khe
- Primary Group: khe
- Home Directory: /mnt/tank2/home/khe
- Shell: bash
- Password: XXXXXXXXX
- Password confirmation: XXXXXXXXX
- Permit sudo: checked
- Auxiliary groups: wheel
It appears that the owner:group of a user's home directory must be set manually:
$ chown khe:khe /mnt/tank2/home/khe
Configure the SMB service
Under 'Services', configure 'SMB' as follows, then click 'OK':
- NetBIOS name: filesmb
- Description: FreeNAS 11 Server
- Unix Extensions: checked
- Zeroconf share discovery: checked
- Hostname lookups: checked
- Bind IP Addresses: checked, 192.168.88.22
Under 'Services->ControlServices' ensure that SMB's status is 'Running' and that 'Start on boot' is checked. This is all that's needed to enable SMB. The next step is to share some directories.
The host that I am running FreeNAS on is called files. By default, the NetBIOS name used by SMB will also be files. AFP shares will also appear under files. If the SMB and AFP share names are identical, it apparently causes problems in which one protocol may hide the shares of the other. Make sure that the 'NetBIOS name' above is something other than your hostname (e.g., in my case, I chose filesmb so as not to conflict with files). I suspect this could be handled in the netatalk configuration, but apparently not through the FreeNAS GUI.
Create the Public SMB Shares
I wish to share /mnt/tank2/media and /mnt/tank2/pub publicly. To create a share for media, go to 'Sharing->Windows(SMB)Shares', click 'Add', enter the following configuration, then click 'OK':
- Path: /mnt/tank2/media
- Use as home share: unchecked
- Name: media
- Apply default permissions: checked
- Browsable to network clients: checked
- Allow guest access: checked
Follow the same process for pub.
At this point, the shares should be visible on your clients. Test them and ensure that they work properly.
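Besides the Finder, a quick way to test from a Mac is to mount a share from the command line. Guest access is enabled above, so no credentials are needed; the mount point below is an arbitrary example:

```shell
# List the shares the server is offering, then mount one as guest.
smbutil view //guest@192.168.88.22
mkdir -p /tmp/media
mount_smbfs -N //guest@192.168.88.22/media /tmp/media  # -N: no password prompt
ls /tmp/media
umount /tmp/media
```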
Create SMB Share for MacOS Home Directories
Due to problems that I encountered, I shared OS X / macOS home directories over SMB instead of AFP; as already mentioned, AFP is reserved for Time Machine backups.
Create a new SMB share for the home directories with the following configuration:
- Path: /mnt/tank2/home
- Use as home share: checked
- Name: home
- Browsable to Network Clients: unchecked
- Show hidden files: checked
If you've followed all the steps up to this point, your client should show three shares: media, pub, and the home share khe. If you aren't interested in Time Machine backups, you can stop here.
Bonus: on MacOS, you can also check SMB shares from the command line via:
$ smbutil view //filessmb
Configure TimeMachine Backups
With basic SMB sharing completed, it is now time to configure AFP for Time Machine backups over the network.
Assign an IP address to AFP that is not being used by SMB and disable 'home directories'. Click 'Services->AFP', configure as follows, then click 'OK':
- Enable home directories: unchecked
- Bind IP Addresses: checked, 192.168.88.23
To be clear: AFP is not being used to share MacOS home directories; this is done by SMB.
Create a TimeMachine Group
Create a group called
tm with otherwise default settings.
Create a ZFS dataset to contain TimeMachine Backups
Under 'Storage->Volumes' create a ZFS dataset named
tm with a quota of
1000 GiB. I named mine tank2/tm.
Once created, change 'Owner (group)' to
tm, which is the group we just
created. Change the permissions so that members of the tm group can write to the dataset.
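The same ownership and mode changes can be made from the FreeNAS shell. The snippet below sketches the idea against a throwaway directory; on the server you would target /mnt/tank2/tm and the real tm group, and note that stat's flags differ between Linux and FreeBSD:

```shell
# Illustrative only: a group-writable directory like the Time Machine dataset.
mkdir -p /tmp/tm-demo
# On the actual server: chown root:tm /mnt/tank2/tm
chmod 775 /tmp/tm-demo       # rwxrwxr-x: group members may write
stat -c '%a' /tmp/tm-demo    # GNU/Linux syntax; FreeBSD uses: stat -f '%Lp'
```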
Create TimeMachine User(s)
You must create a new user for each device that will be backed up with Time Machine. I was really turned off by this, but I was never able to get AFP to handle home directories properly while also having SMB work properly. This seems to be due to netatalk having a somewhat aging CIFS implementation. There are so few users on my network that I concluded it really wasn't much of a bother.
Initially, I will back up two devices to Time Machine. First, I created the user laptop with the following configuration:
- Username: laptop
- Create new primary group: unchecked
- Primary Group: tm
- Full Name: KHE laptop
- Password: MYSECRETPASSWD
I created another user,
desk, in a similar manner.
Create the AFP Share
Under 'Sharing->Apple(AFP)Shares->Add' create the share:
- Path: /mnt/tank2/tm
- Name: TimeMachine
- Allow List: @tm (members of the tm group)
- Time Machine: checked
- Default file permission: 665
- Default directory permission: 775 (directories require the execute bit)
Confirm the AFP Share
Optionally, on the FreeNAS server, you can see available AFP shares from the command line via:
root@files:/mnt/tank2 $ dns-sd -B _afpovertcp._tcp
Browsing for _afpovertcp._tcp
DATE: ---Fri 08 Dec 2017---
17:05:53.067  ...STARTING...
Timestamp     A/R    Flags  if Domain   Service Type        Instance Name
17:05:53.067  Add        2   2 local.   _afpovertcp._tcp.   files
^C
Make a TimeMachine Backup
At this point, the share should be available for use by any Apple device on
the network. Confirm this by opening your Time Machine settings. Under
'Select Disk', the share (in my case, TimeMachine on "files") should
appear. Note that the share will not appear in the Finder.
When you choose TimeMachine on "files" for backups, you will be prompted
for a user name and password. Use the Time Machine user created above
(e.g., laptop), not the SMB user created earlier.
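If you prefer the command line to the Time Machine preference pane, macOS's tmutil can point a client at the share directly. The user name and address come from the examples above; -p makes tmutil prompt for the password:

```shell
# Select the AFP share as this Mac's backup destination.
sudo tmutil setdestination -p "afp://laptop@192.168.88.23/TimeMachine"
# Confirm the destination, then kick off a backup immediately.
tmutil destinationinfo
sudo tmutil startbackup
```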
LSI 9211 Notes
Here are a few random notes on using something other than an H200 that is still based on the LSI 9211 / SAS 2008 chipset.
A comparison of LSI HBA features is here.
A great article on reflashing an IBM M1015 to an LSI 9211-8i in IT or IR mode is here. I followed this guide without error when flashing my M1015.
Here is a guide to many cards based on the LSI 9211 chipset. This is the best written and most complete article I've run across.