In 2017 and 2019 I made guides for setting up iSCSI between FreeNAS and ESXi, and they have been some of my most viewed content. In this article I cover a solid storage configuration for a two-PC home lab running TrueNAS and ESXi.
The IT landscape is changing all the time, and just because something was good several years ago doesn't mean it's good today. However, time has been kind to both iXsystems TrueNAS Core and VMware ESXi; today they're both great options. TrueNAS and ZFS are no longer difficult to run: even a dual-core PC with 8GB of RAM and a few hard disks can net you a pretty decent storage server. In fact, a Raspberry Pi 4 gets passable performance with ZFS on Linux now.
Storage Design
The storage design I am using consists of NFS for VM OS storage on ESXi, while higher-performance secondary drives use iSCSI directly to the VMs. This nets higher IOPS, especially on writes.
NFS Pros:
- File share rather than block level
- You can snapshot each file and roll back easily
- Space is only used as you create VMs, not a large chunk up front
- Multiple machines can connect to one share
NFS Cons:
- Lower write IOPS
- Slightly lower read performance on average
iSCSI Pros:
- Significantly faster write IOPS
- Faster overall average performance
- The file system is set up on the iSCSI client, so the disk presents as local storage
iSCSI Cons:
- Block-level storage, not file sharing
- Harder to snapshot and back up multiple VMs
- All space is allocated at creation (thin provisioning is supported but costs performance)
For a good performance comparison (albeit with XCP-NG in 2018) check out this video.
TrueNAS Pros and Cons
Let's go over some of the pros and cons of TrueNAS for VM storage:
Pros
- Built on FreeBSD using ZFS file system, incredibly robust combination
- ZFS uses lots of RAM (instead of leaving it idle), and you will keep seeing benefits from adding more RAM
- Copy on write (COW) file system means fast snapshot creation/restoration
- COW file system also helps in being resilient to power failures
- Bit rot protection through check-summing and scrubs
- ARC (adaptive replacement cache) gives excellent read performance and can be expanded with an L2ARC on an SSD or Optane drive on high-end systems
- The ZIL (ZFS intent log) acts as a fast persistent log for synchronous writes and can also be put on a dedicated SSD or Optane drive (a SLOG)
- lz4 compression is very fast, and with VMs it's not unusual to see around a 50% compression ratio, improving read/write speeds and meaning less space taken up (you can check your actual ratio, as shown below)
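To see what compression is actually buying you, ZFS reports the ratio per dataset. A quick check from the TrueNAS shell (the dataset name here is just the example used later in this guide):

root@truenas[~]# zfs get compression,compressratio ssd-test/ssd-nfs-test

A compressratio of 1.50x means your data is occupying roughly two thirds of its raw size.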
Cons
- Requires a beefier system than traditional HW/SW RAID solutions; 8-16GB of RAM is the low end.
- You can't simply add/remove individual drives like you can with Unraid, Synology Hybrid RAID and Storage Spaces.
- Never use ZFS with SMR hard drives; as Will Taillac explains here on ServeTheHome, it's a bad idea. Be careful when selecting your disks and go CMR. Note the RAIDZ resilver time specifically at the end of the linked article.
Picking safe drives is easy with a little research. Seagate specifically states on their website which drives are CMR and which are SMR.
Here are affiliate links to Seagate IronWolf and WD Red drives that are solid CMR drives for NAS use.
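If you already have drives in hand and aren't sure which type they are, you can pull the exact model number from the TrueNAS shell with smartctl and cross-reference it against the manufacturer's CMR/SMR lists (the device name ada0 is just an example, yours may differ):

root@truenas[~]# smartctl -i /dev/ada0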
iXsystems (the creators of TrueNAS) have a range of systems available. Of course, these are guaranteed to run TrueNAS excellently. You can browse their range here (Amazon Affiliate Links):
TrueNAS Mini E, TrueNAS Mini X, TrueNAS Mini X+, TrueNAS Mini XL+
My Test Setup
| ESXi Host | TrueNAS Host |
| --- | --- |
| Ryzen 1700X | Core i5 4590 |
| 32GB RAM 2666MHz | 8GB RAM 1600MHz |
| Broadcom Quad Gigabit NIC + Intel 1Gbit Onboard | Intel Quad Gigabit NIC + Intel 1Gbit NIC |
| | 2x Samsung 860 EVO 250GB SSDs (Striped) |
Networking Setup
For physical networking I recommend dual links for any setup; otherwise a single loose wire will crash all your VMs. Dual SFP+ 10Gbit cards are pretty cheap now. Here are a couple with cheap used options on Amazon as of writing: Link 1 and Link 2.
Here is the simple network diagram for my example setup in this guide. Note I am still on peasantry 1Gbit links, fine for demoing this setup, and not too shabby overall with four of them (essentially 4Gbit):
Each link for the storage network is on its own subnet as follows:
10.0.0.1/24 10.0.1.1/24 10.0.2.1/24 10.0.3.1/24
This is the correct way to set up multiple physical links. To give you a good understanding of why they can't be on the same subnet, here is a forum post by jgreco (the TrueNAS forum's resident Grinch). I am using 9K MTU all around, which in my testing gives an ~8% throughput boost and a similar IOPS increase.
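Before trusting jumbo frames, it's worth verifying that 9K MTU actually survives end to end, since a mismatched switch port or vSwitch will silently fragment or drop large packets. A quick check is a don't-fragment ping from each side; here vmk1 and the 10.0.0.x addresses are examples from my subnets above, and 8972 bytes is 9000 minus the 28 bytes of IP/ICMP headers:

[root@esxi:~] vmkping -I vmk1 -d -s 8972 10.0.0.1
root@truenas[~]# ping -D -s 8972 10.0.0.10

If these fail while normal pings work, something in the path isn't set to 9000 MTU.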
TrueNAS Network Interfaces:
ESXi Network vSwitches, Port Groups and VMkernel NICs:
Also, here is an image of the vSwitches for the storage network once they're configured.
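If you'd rather sanity-check the ESXi side over SSH, you can list the vSwitches and VMkernel interfaces from the shell; the MTU column should read 9000 on all the storage links:

[root@esxi:~] esxcli network vswitch standard list
[root@esxi:~] esxcli network ip interface list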
NFS Setup on TrueNAS
NFS share setup on TrueNAS is not very involved; however, there are a few gotchas I'll cover here.
- On your storage pool (Storage > Pools) make a new Dataset for the NFS share
- Name your dataset; the defaults are likely fine, i.e. lz4 compression on and dedupe off.
- On the right of the new dataset, click the three dots and go to Edit Permissions.
- Make sure you tick the boxes to enable read/write here.
- On the left go to Shares > Unix Shares (NFS) and click ADD.
- Select the dataset path and give the share a description.
- On the left go to Services and enable NFS. Also tick the box to Start Automatically.
- Click ADVANCED OPTIONS and make sure you have a subnet entry allowing access from all the networks you'll be using.
- Click the pencil icon to get NFS options and tick Enable NFSv4.
- Open the Shell and put in this command, making sure you are targeting your NFS dataset:
root@truenas[~]# zfs set sync=disabled ssd-test/ssd-nfs-test
This last command solves the issue of NFS requesting that ZFS sync all data to disk after every write before doing anything else. With sync=disabled, ZFS will tell NFS "the data is synced!" while it's actually still in RAM waiting to be written. With ZFS this is generally fine and won't cause data corruption on a power failure: in a theoretical worst case you may lose up to around 5 seconds of write data, but no corruption. The benefit is drastically improved write performance.
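You can confirm the setting took, and revert to the default later if you want sync behaviour back, with the same zfs tooling:

root@truenas[~]# zfs get sync ssd-test/ssd-nfs-test
root@truenas[~]# zfs set sync=standard ssd-test/ssd-nfs-test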
NFS Setup on ESXi
The setup of an NFS share in ESXi has very few steps.
- On the ESXi management web gui, go to the Storage tab.
- Select Mount NFS Datastore
- Tick NFS 4, as it's needed for higher performance with multiple links. Fill in the information, separating IP addresses with commas. The path to the NFS share is the path to your dataset in TrueNAS.
- Go next, then finish and you will have it working!
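For reference, the same mount can be done from the ESXi shell in one line. A minimal sketch assuming my example subnets, a dataset living at /mnt/ssd-test/ssd-nfs-test, and a placeholder datastore name of truenas-nfs:

[root@esxi:~] esxcli storage nfs41 add -H 10.0.0.1,10.0.1.1,10.0.2.1,10.0.3.1 -s /mnt/ssd-test/ssd-nfs-test -v truenas-nfs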
iSCSI Setup
We first have to set up iSCSI in TrueNAS by making a Zvol and configuring the iSCSI service. There is now a wizard which simplifies the configuration.
- Create a Zvol on your storage pool; don't forget to write GiB when putting in the size!
- Click on the Wizard button, then fill in the name and select the Zvol from the drop-down list.
- Create a portal with all the IP addresses of the interfaces you'll be using.
- On Initiator, go Next and Submit.
- In Services, make sure iSCSI is enabled and ticked to Start Automatically at boot, and your TrueNAS setup is complete.
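If you prefer the shell for the Zvol itself, it can also be created from the TrueNAS console. A minimal sketch with placeholder pool/Zvol names (add -s for a sparse/thin volume, with the performance caveat mentioned earlier):

root@truenas[~]# zfs create -V 100G ssd-test/ssd-iscsi-test

The portal, target and extent configuration still happens through the wizard as above.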
The next steps depend on which OS you've installed; Linux and Windows VMs can both access iSCSI shares. In the near future I will have guides for both linked here.
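As a taste of the Linux side, with the open-iscsi tools installed, discovery and login look roughly like this; the portal IP is from my example subnets and the IQN is a made-up example, yours comes from the target you named in the TrueNAS wizard:

root@vm:~# iscsiadm -m discovery -t sendtargets -p 10.0.0.1
root@vm:~# iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:ssd-iscsi-test -p 10.0.0.1 --login

After login, the Zvol shows up as a regular local disk (e.g. /dev/sdb) ready to partition and format.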
Disk Benchmarks
Benchmarks don't hold much weight here due to the ZIL and a high ARC hit rate; however, they can give you some idea of where the difference between NFS and iSCSI performance lies. This benchmark was done in a Server 2019 VM running with 6 cores, 8GB RAM, VMXNET3 vNICs and VMware Paravirtual SCSI storage. Remember my TrueNAS host only has 8GB RAM.
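If you want to run a comparable test yourself on a Linux VM, fio is an easy way to exercise a mounted disk. A minimal random-write sketch; the file path is a placeholder, so point it at the NFS- or iSCSI-backed disk you want to measure:

root@vm:~# fio --name=randwrite --rw=randwrite --bs=4k --ioengine=libaio --direct=1 --iodepth=32 --numjobs=4 --size=4G --runtime=60 --time_based --filename=/mnt/testdisk/fio.dat

Compare relative IOPS between the NFS and iSCSI disks rather than trusting any single absolute number.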