DIY: Building a ZFS NAS with FreeNAS


  • Instructions on how to build a semi-professional-grade NAS on your own.
  • A list of the selected components, including the reasons why I chose them.
  • This article covers the hardware setup first and foremost and applies to systems other than FreeNAS as well.
  • More detailed articles about the FreeNAS software and ZFS in particular will follow (not written yet).


First of all: I want to thank the FreeNAS coders for this great software and the FreeNAS/FreeBSD community for providing excellent in-depth knowledge on all the topics that came up during my setup.

Who’s building this NAS? I’m a software engineer with experience in server administration and infrastructure. Or, to put it another way: I know what I’m doing, but I’m not an enterprise storage professional – so use at your own risk.

Time passes, as does storage space. So I had to decide what to do: buy another preconfigured NAS box from QNAP or Synology (both of which I was quite happy with in the past), or listen to the nerd in me, who is willing to spend ages in forums and perform hours of excessive deskflipping while things don’t work as expected?

Since you’re reading this article, the decision is already history, and I’m happy to share what I’ve researched so you don’t have to.

I’m trying to pursue a semi-professional approach here while keeping an eye on the budget at the same time. This NAS should be suitable for home/small business usage, but as with any other storage solution – and despite RAIDZ – you still want to have your data backed up somewhere else as well.

To the facts:


I wanted a free open source solution on a good OS base and found quite a few promising projects. Among them were FreeNAS and nas4free. Both of them are based on FreeBSD and thus natively ZFS-capable.

An HDD-only ZFS setup without tuning may not be the most performant choice where a standard ext4 RAID5 would do the job, but remember what Kennedy said: We chose to build the NAS on our own not because it is easy, but because it is hard!

So I searched a lot of forums, keeping in mind that the fanboys of the different solutions always tend to be a little biased. In the end I gave up diving too deep into the version-dependent differences between the two systems and let my instinct choose: it went for FreeNAS, although some stated that ZFS performance with nas4free was better (at least compared to older FreeNAS versions). But ZFS is ZFS, so any performance difference should result solely from configuration differences – and some ZFS knowledge comes in handy from time to time anyway.

Before spending money on new components I decided to first create a virtual setup using VMware Player, 5 virtual disks and the FreeNAS image. Everything went as expected: the virtual zpool behaved well and CIFS, NFS and iSCSI shares worked, so no problems here. After playing around with my virtual NAS for some time I was ready to go shopping:
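By the way, if you want to rehearse the pool commands without a full hypervisor setup, a throwaway pool built from sparse files works too. This is only a sketch: the pool name, file paths and the raidz level are made up for illustration, and it needs root plus the ZFS tools installed (it skips cleanly otherwise).

```shell
# Dry run: build a throwaway raidz pool from sparse files instead of real
# disks -- handy for rehearsing zpool/zfs commands. All names are made up.
command -v zpool >/dev/null 2>&1 || { echo "ZFS tools not installed, skipping"; exit 0; }
[ "$(id -u)" -eq 0 ] || { echo "not running as root, skipping"; exit 0; }

mkdir -p /tmp/zfs-dryrun
# five sparse 1GB "disks", mirroring the five-drive layout of the real build
for i in 1 2 3 4 5; do truncate -s 1G "/tmp/zfs-dryrun/disk$i"; done

zpool create dryrunpool raidz /tmp/zfs-dryrun/disk1 /tmp/zfs-dryrun/disk2 \
  /tmp/zfs-dryrun/disk3 /tmp/zfs-dryrun/disk4 /tmp/zfs-dryrun/disk5
zpool status dryrunpool

# tear everything down again
zpool destroy dryrunpool
rm -rf /tmp/zfs-dryrun
```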

The Hardware:

I think this is the article’s most valuable part for you if you’re planning to set up a FreeNAS/nas4free/FreeBSD-ZFS NAS: I’ve spent quite some time finding out which hardware components to use and comparing them with each other, keeping budget, power consumption, small case dimensions and performance in mind:

  • Case
    • Lian Li PC-Q25B
    • Why I bought it:
      • It’s small (Mini-ITX board), yet it’s capable of housing a standard ATX format power supply unit. But standard ATX format really means standard ATX format – many PSUs are bigger than the ATX specifications suggest. So you’d better get a small PSU with cable management or it just won’t fit, period.
      • It contains 5 standard hot-swappable HDD drive bays. The bays are not what you’d expect from a standard NAS case, though: if a drive fails you still have to open the case, but that’s easy and should happen rarely.
      • Depending on the hardware used, I’d expect it could get quite hot in this small box. With the setup you see here it hasn’t been a problem at all, though.
      • Besides the hot swap bays there is room for two to three more disks/ssds at the case floor and the mounting concept there is quite clever.
      • The case quality is really good – I’ve definitely seen worse.
      • What else has it? Blue light. What does it do? Shines blue.
  • PSU
    • 430 Watts Corsair CX Series Modular 80+ Bronze
    • Why I bought it:
      • Enough Power
      • Matching ATX specification dimensions
      • Cable Management Version (don’t order the one without!)
      • Active PFC
  • CPU
    • Intel Core i3 4130T 2x 2.90GHz So.1150 BOX
    • Why I bought it:
      • This Core i3 supports ECC RAM. Since ZFS heavily relies on memory, it’s not a good idea to use non-ECC RAM, so I saw that as a must-have. If you’re interested: the forums are full of threads addressing the ECC vs. non-ECC topic in detail. If you’re not interested: use ECC!
      • This model supports the AES instruction set extension (AES-NI). This is very important since it minimizes the CPU performance impact when using disk encryption.
      • I wanted the cheap boxed cooler because it fits perfectly into the small case where the PSU is located directly above the processor itself – no room for fancy monster coolers here.
      • For pure file system operations a Celeron probably would have been fast enough but I wanted more flexibility to add some additional software and services on the NAS in the future so I decided to go with the faster i3. So far this seemed a good choice.
      • I chose the “T” version, which is slightly slower but uses less power and thus produces less heat.
  • Mainboard
    • ASRock E3C226D2I
    • Why I bought it:
      • ECC RAM Support (Unregistered)
      • 6 SATA3 Interfaces on board
      • 2 Intel 1Gbit NICs on board
      • Haswell 1150 CPU Socket
      • Mini-ITX format
      • You probably won’t find any other board that matches this feature set – it seemed to be the most exotic component in the whole setup. Luckily I was able to find a retailer that could ship this board from stock.
      • Though it’s a great idea to put a USB socket on the board itself, I didn’t use it for FreeNAS’ system USB stick due to its bad accessibility once the system is assembled.
  • RAM
    • 16GB (2x 8192MB) Kingston ValueRAM DDR3-1333 ECC DIMM CL9-9-9 Dual Kit
    • Why I bought it:
      • It is ECC RAM (Unregistered)
      • As the name “ValueRAM” suggests it’s not the fastest or fanciest RAM out there but it does the job and no matter what might be slow in your NAS: It’s not the RAM.
      • Be aware that there is a big difference between “unregistered” and “registered” ECC RAM. The two are NOT compatible: registered ECC RAM will not work at all on the board mentioned above. I know that for sure since I accidentally clicked on the wrong item and they first sent me the registered RAM (that caused some serious deskflipping).
      • 2 units with 8GB each. The board only has two slots, and reliable ZFS resources suggest 1GB of RAM per 1TB of storage as a rule of thumb (unless you plan on excessively using ZFS’ deduplication feature), so that’s a good match.
  • Hard Disk drives
    • 5*3000GB Seagate Desktop HDD ST3000DM001
    • Why I bought it:
      • I already had 2 of them lying around and they seemed suitable enough for use in a NAS so I got 3 more of them.
      • Note: With power management activated, I’d suggest keeping an eye on these S.M.A.R.T. values:
        • Load cycle count. AFAIK this counts how often the disk’s heads are parked and activated again (you can hear a chirping sound when the disk does that).
        • Spin up/down count. A load cycle does not mean that the disk spins down – that depends only on the power management level you set for the drive and on the disk’s firmware, which interprets it in whatever way it likes. The resources on this topic are nearly endless. Based on what I read, and on the fact that a NAS with a lot of services running on it always wants to write something to disk anyway, I’d recommend getting a drive whose power consumption is low enough to just let it run with no power saving activated at all.
  • SSD
    • 128 GB Crucial Realssd C300
    • Why I bought it:
      • I bought this drive about 2-3 years ago for use in a workstation PC, but its S.M.A.R.T. data states it’s still in good health, so I decided to use it as my ZFS turbocharger.
      • Though not highly recommended, I split the drive into a small 8GB partition for the ZIL (ZFS intent log) and used the rest for the L2ARC (2nd-level adaptive replacement cache), because I had already maxed out the 6 onboard SATA ports with the 5 HDDs and the SSD. In my case that sped things up significantly, but that’s not part of this article anymore – I’ll write another one about it.
  • USB Stick to hold the FreeNAS
    • Sandisk Cruzer Blade 8GB
    • Why I bought it:
      • I know it’s working with FreeBSD (also on USB 3.0) and I can boot from it
      • It’s big enough
      • I’d have preferred a much smaller version of this stick, which is available as well, but it wasn’t in stock and I couldn’t wait :-)
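To keep an eye on the S.M.A.R.T. counters mentioned in the HDD section, something like this works. A sketch only: the device path is an assumption – on FreeBSD the SATA drives typically show up as /dev/ada0 and so on, so adjust for your system.

```shell
# Print the parking/spin-up related SMART attributes for one drive.
# /dev/ada0 is a placeholder; substitute your own device names.
command -v smartctl >/dev/null 2>&1 || { echo "smartmontools not installed, skipping"; exit 0; }
smartctl -A /dev/ada0 2>/dev/null \
  | grep -Ei 'Load_Cycle_Count|Start_Stop_Count|Power_Cycle_Count' \
  || echo "no matching attributes (wrong device path?)"
```

Running it once a week and comparing the load cycle count against the previous week tells you quickly whether your power management settings are chewing through the drive's rated head parking cycles.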
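And for the curious: the ZIL/L2ARC split on the SSD mentioned above boils down to a handful of commands. This is a sketch, not a recipe – the device name ada5 and the pool name tank are assumptions, gpart is FreeBSD-specific, and the partitioning step is destructive, so double-check before running anything like it.

```shell
# Split one SSD into a small ZIL (SLOG) partition and an L2ARC partition,
# then attach both to an existing pool. ada5 and "tank" are placeholders.
command -v gpart >/dev/null 2>&1 || { echo "no FreeBSD gpart here, skipping"; exit 0; }
[ -e /dev/ada5 ] || { echo "no /dev/ada5, skipping"; exit 0; }

gpart create -s gpt ada5              # fresh GPT label on the SSD (destroys data!)
gpart add -t freebsd-zfs -s 8G ada5   # -> ada5p1, the 8GB ZIL partition
gpart add -t freebsd-zfs ada5         # -> ada5p2, the remainder for L2ARC

zpool add tank log ada5p1             # attach as separate intent log device
zpool add tank cache ada5p2           # attach as L2ARC cache device
```

Note there is nothing ZFS-specific about the partitioning itself; ZFS only cares which partitions you hand to `zpool add`.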

That’s it so far. I hope this post helps you in some way. I also added some pictures I took while building up this nice little thing – feel free to comment/ask!


21 thoughts on “DIY: Building a ZFS NAS with FreeNAS”

  1. Nice guide, but I suggest benchmarking with and without the SSD. 90% chance your SSD isn’t going to give you ANY benefit in performance.

  2. @HazCod: Thanks for your comment! Actually I did (sort of): At first I used the SSD as pure L2ARC (no ZIL at all), since many forum posts suggested not to split SSD devices for performance/flushing reasons. Read performance already maxed out the 1Gbit LAN, and since port trunking is no option here, that was nothing to worry about. But writing even big single test files to the NAS via CIFS slowed down to around 50MB/sec. After partitioning the SSD and giving 8GB to the ZIL I’m now able to immediately max out the LAN on CIFS writes as well. I have to admit that this isn’t serious benchmarking at all, but a difference of at least twice the previous speed is remarkable. To be honest, it doesn’t make complete sense to me either, since zpool’s iostat showed that the ZIL usage while copying is not really noticeable – yet something does happen and it’s much faster than before. On the other hand: ZIL usage on my iSCSI shares is quite heavy, and since the SSD is just way faster than the HDDs it should inevitably have a positive overall impact on performance. Still, it’s quite interesting: In case I remove the ZIL/L2ARC I’ll do some more in-depth testing and post my results here.
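    For reference, the crude single-file write test described above can be reproduced from a Linux client with GNU dd. The target path is an assumption – point it at your own mounted share (e.g. /mnt/nas) instead of the fallback temp directory used here.

```shell
# Rough sequential-write check against a mounted share.
# TARGET is a placeholder; set it to your CIFS/NFS mount point.
TARGET="${TARGET:-/tmp/nas-write-test}"
mkdir -p "$TARGET"

# conv=fdatasync (GNU dd) forces a flush, so the reported MB/s reflects
# the actual write to the server rather than the client's page cache.
dd if=/dev/zero of="$TARGET/testfile" bs=1M count=64 conv=fdatasync

rm -f "$TARGET/testfile"
```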

  3. I will suggest you have a look at the HP ProLiant N54L, which does the above on a budget. It supports 4 drive bays plus the optical bay – 5 drives if you modify the firmware to allow it. It’s quite popular with FreeNAS and other home servers as well.

  4. @Michael Guy: Cool! That’s a neat piece of hardware I didn’t know yet – thanks for the hint! The RAM seems to be limited to “only” 8GB. Based on the suggestion of 1GB RAM per 1TB of ZFS pool storage that might be a problem depending on how you use your NAS, but I guess it’s enough for the normal SOHO user scenario. The price and power consumption look really good. Definitely something I’d recommend when someone is planning on building a NAS…

  5. @Matthew: Thanks! Not yet – I’m waiting for an ampere meter from a friend of mine – after that I’ll post the results here. I disabled HDD spindown to save load cycles and I hope it’s around 80 Watts – I guess it’s not too realistic to assume the consumption is any lower than that but we’ll see :-)

  6. @Matthew: I’ve measured the power consumption by now: It’s between 65 and 75 Watts when being idle (I disabled HDD spindown to save load cycles) and goes up to around 100 Watts when putting load on both HDD/SDD traffic and CPU. So that’s basically what I’ve been expecting…

  7. Thanks! That’s within the ballpark I was hoping for. My current box, running 4x2TB drives and an Athlon II X2 235e processor, idles at around 90 watts! I will either go with your mobo/CPU selection, or wait a couple of years for AMD’s ARM solutions to really come out.

  8. @Matthew: I think there’s a lot of potential to save some more power with my setup: I didn’t choose extra power-saving HDDs (since I already had two of the drives) and there are lots of tuning options regarding HDD advanced power management. I also thought about enabling Wake-on-LAN – in standby the NAS only consumes 4 Watts. I’m confident you could get it down to around 40-50 Watts with other HDDs and advanced power management enabled.

  9. Have you ever gotten a FreeNAS server to reliably sleep/wake? I tried it in the past and I think it didn’t work reliably. Maybe I’ll give it another try, because that would really be the ideal way to go! Most of the time I keep my server off since I only need it about once a week, but when I do, it’s annoying to go turn it on (and it puts more wear on the hardware).
    My older box was an HP DC7900 with a Core 2 Duo E8400, 4GB RAM, and the same drives, and it idled at around 65 – 70 watts.

  10. @Matthew: I already tried, and it didn’t work out yet, but based on my experience with Wake-on-LAN on other systems I guess that’s just the kind of black magic where you have to perform your BIOS settings during a full moon or something in order to make it work -.- I’ll get back to you after my next paranormal WOL tryouts…

  11. Thank you very much for the great guide! I bought the hardware! The build is my beginning of summer project!

  12. @2000Ash2000 I’m happy it helped you. My system has been running 24/7 since then without any modification and with no problems at all. For now I’m still very happy with this setup :-)

  13. Thanks for this. Have you tinkered with or set any permissions for access to your files? I’m planning on setting up a FreeNAS box for an office, but I’ve heard it’s very fiddly to configure permissions – would love to know your thoughts.

  14. @qasim: I think it depends on how you plan to access your files over your network. NFS, CIFS or any other network sharing protocols imply very different ways of configuring access rights. So FreeNAS doesn’t make that easier or more difficult than other systems I’d say…

  15. @Warren D: Hi! It’s been a while – it was around 900€ / 1000 USD, but I already had the SSD (which can be much smaller but should be part of the setup) and two of the five disks. So you’ll probably end up at roundabout 1300€ / 1500 USD.

  16. Hi, thanks for sharing your setup. I also plan to buy this motherboard, but there are some problems with the ASRock E3C226D2I mentioned on various forums. Did you have any problems during booting, i.e. hanging or long POST times? Some people report problems with this motherboard when booting with USB 3 enabled. Also, did you use the IPMI features of the motherboard, which are also reported as not always working as expected?

  17. I have been using a virtualized FreeNAS VM with 4GB RAM for quite a while (it performed quite well, but with lower performance when running concurrent virtual machines) and I’m looking to build a physical box using recycled parts: an Intel Core 2 Duo E8400 3GHz CPU and 8GB RAM. I have 4x 1TB 7200RPM SATA drives and a single 128GB SSD which I would like to add for ZIL/L2ARC caching. I am curious how you partitioned the SSD to be able to do both. Did you simply use the FreeNAS GUI to configure this, or the CLI?
    The main purpose of my NAS will be providing a CIFS/NFS share and iSCSI LUNs for my home training lab using VMware ESXi, with the boxes connected over Intel 1GbE NICs using a crossover cable. My older MB does not support SATA3, which I think will make little difference for the drives, but I’m not sure what impact it will have on the SSD.
    Any guidance and lessons learned will be greatly appreciated.

  18. @butch I had no problems whatsoever, neither with USB 3 nor with long POST times (at least not longer than I’d call normal). I can’t say anything about the IPMI features, though, since I didn’t use any of them – at least as far as I know…

  19. @Mike thanks for asking. I have quite a similar use case here, and even though several forum posts suggested that a single SSD for both ZIL and L2ARC probably wouldn’t optimize performance, it still did. I came across the same question you did: It’s been a while now so I don’t remember all the details, but I think I just had to manually partition the SSD, I believe with gpart (so there is nothing ZFS-specific about the partitioning). After that I just told ZFS to use one partition for L2ARC and the other one for the ZIL. I’m aware that this isn’t the most elegant or safest setup, but it has done an excellent and reliable job so far…
