Looking for updated TrueNAS content? Check out my newer post here: TrueNAS 12 & ESXi Home Lab Storage Design
Note: These steps are still mostly valid as of TrueNAS 12 and ESXi 7.0 release.
It’s been over 2 years since my previous guide on setting up iSCSI between FreeNAS and ESXi, and in that time many things have changed. FreeNAS now has a new UI, making things simpler and more straightforward. I think we can all agree the prettier graphs are extremely important too.
In this guide we’ll evaluate whether FreeNAS is still the best solution for your storage needs and explain why iSCSI performs best, followed by complete setup instructions for a killer multi-path, redundant-link iSCSI config. FreeNAS is great, but as with most things there are pros and cons, so let’s get them out of the way as clearly as possible.
Pros
- Built on FreeBSD using the ZFS file system, an incredibly robust combination
- ZFS uses your RAM as cache (instead of leaving it idle)
- Copy-on-write file system, very resilient to power failures
- Bit rot protection through checksumming and scrubs
- ARC (adaptive replacement cache) gives excellent read performance and can be extended to L2ARC with an SSD or Optane drive
- The ZIL (ZFS intent log) protects synchronous writes, acting somewhat like a fast persistent write cache, and can also be put on an SSD or Optane drive (SLOG)
- lz4 compression is very fast, and with VMs it’s not unusual to see a 50% compression ratio (a quick way to check this on a live pool is shown after the cons list)
Cons
- Requires a beefier system than traditional hardware/software RAID solutions; 8-16GB of RAM is the low end
- You can’t simply add/remove individual drives the way you can with Unraid and mdadm
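If you’re curious what lz4 actually saves on your own system (the compression ratio mentioned in the pros above), you can check it from the FreeNAS shell once a pool exists. A minimal example, assuming a pool named tank (swap in your own pool or dataset name):
zfs get compression,compressratio tank   # shows the algorithm in use and the achieved ratio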
Is FreeNAS Right For You?
As of writing this in November 2019, I would lean towards Unraid for a pure file/media server. Due to its all-in-one nature, good VM support (running VMs directly on FreeNAS in bhyve isn’t worth your time, in my experience) and the ability to add/remove disks, it’s well suited for media/home use. However, when performance starts to matter, FreeNAS shines, with plenty of options to speed up VMs and/or databases and get great IOPS out of your spinning rust.
If you’re in the planning phase of your build, sticking to NAS hard drives for any setup is recommended. I recommend WD Red and Seagate IronWolf.
The Config
The goal is simple. We’ll be setting up 4 links between FreeNAS and ESXi (very easy to adapt this to 1-2 link setups). This supports failover and multipathing for a solid speed boost. I’ll show you some tricks to get better performance out of your setup along the way too!
The network I’ll be using for iSCSI is simply 4 direct links between the 2 servers. It’s best practice to keep your iSCSI network separate from every other network.
FreeNAS          Cable     ESXi
10.0.0.1/30    <---->    10.0.0.2/30
10.0.0.5/30    <---->    10.0.0.6/30
10.0.0.9/30    <---->    10.0.0.10/30
10.0.0.13/30   <---->    10.0.0.14/30
While I’m doing this with 1Gbit links, I would highly recommend picking up some 10Gbit SFP+ cards for low-latency, high-throughput iSCSI. Here are a couple of affiliate links; if you can get them used for under $50 USD you’re getting a good deal.
Amazon – HP 593742-001 NC523SFP 10GB 2-PORT SERVER NETWORK ADAPTER
Amazon – HP 586444-001 NC550SFP dual-port 10GbE server adapter
FreeNAS Network
For the first step, I’m going to configure the IP addresses on the 4 connections I’ll be using for iSCSI. I’m using /30 subnets, meaning 2 usable hosts each, which is perfect for this setup. You can use any addresses you like as long as they’re reachable from ESXi and each link is on a different subnet.
Go to Network > Interfaces > ADD
Select your NIC, name it (I like to use the NIC as the name), set your IP and subnet mask. Repeat the ADD step for each link you’ll be using.
As a note, for additional performance you can add “mtu 9000” in the Options field. This tells the interface to use jumbo frames, which can result in higher throughput and lower CPU usage. However, it can cause issues on some systems.
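If you do go with jumbo frames, it’s worth verifying them end to end once both the FreeNAS and ESXi sides are configured. A quick sanity check, assuming the addresses from the table above and vmk1 as the ESXi iSCSI VMkernel NIC (8972 bytes of payload plus IP/ICMP headers equals 9000):
vmkping -I vmk1 -d -s 8972 10.0.0.1   # from the ESXi shell, -d sets don't-fragment
ping -D -s 8972 10.0.0.2              # from the FreeNAS shell, -D sets don't-fragment
If either fails while a normal ping works, something in the path is still at 1500 MTU.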
FreeNAS Storage
Next, I’m going to create a storage pool. If you’ve already done this, skip ahead.
Go to Storage > Pools and click the ADD button to create a pool.
Click CREATE POOL
Here you can name your pool. Then add the disks by selecting them and using the right arrow to move them across. Select the RAID type. In my case I’ll use a mirror; with more drives you could do RAIDZ/RAIDZ2, or a stripe of mirrors (RAID10-like) for performance.
After you create your pool, click on the 3 dots next to it and select “Add Zvol”.
It’s recommended not to exceed 50% of your pool’s capacity for your iSCSI share, so I’m going to make mine 115GiB.
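If you prefer the shell over the GUI, the same zvol can be created with a one-liner. A hedged example, assuming a pool named tank and a zvol named iscsi-vm (adjust the names and size to suit your pool):
zfs create -s -V 115G tank/iscsi-vm   # -V sets the volume size, -s makes it sparse (thin provisioned)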
FreeNAS iSCSI
1. Time to set up iSCSI. We’ll first need to create a portal: go to Sharing > Block (iSCSI) > Portals and click ADD.
Use the “ADD EXTRA PORTAL IP” option to add as many interfaces as you need, typing in their IP addresses.
2. Next is Initiators > ADD. Here I’ve added a subnet that covers all my iSCSI networks; however, you can leave it set to ALL and it’ll work fine.
3. Now on to Targets > ADD. We’ll fill in a name and set the portal group ID and initiator group ID to 1.
4. Extents > ADD. This is where we pick our storage zvol under Device. If you’re using HDDs you may like to set the LUN RPM, although it won’t change anything that I’m aware of.
5. Last step before we turn iSCSI on. Go to Associated Targets > ADD and select your target and extent from the lists. Set a LUN ID and save.
6. Go to Services and enable iSCSI, also checking the Start Automatically box.
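As a quick optional check that the target is actually up, you can confirm from the FreeNAS shell that something is listening on the iSCSI port on your portal IPs:
sockstat -4 -l | grep 3260   # should show the iSCSI target daemon bound to each portal address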
ESXi Config
It’s time to configure ESXi: we have to set up the networking and attach the iSCSI storage to it.
ESXi Networking
To network this up to FreeNAS with the 4 links I’m using, I’ll make 4 vSwitches. Go to Networking > Virtual Switches > Add standard virtual switch.
Name your vSwitch and pick the appropriate uplink. Note that if you used the “mtu 9000” option on your FreeNAS interfaces, you’ll have to set the MTU to 9000 here too.
We need to make 4 VMkernel NICs now. Click on VMkernel NICs > Add VMkernel NIC.
Pick a name and select the appropriate vSwitch. Under IPv4 settings, configure the static address and subnet to match the FreeNAS system. If you’re using MTU 9000, set that here as well.
Repeat this for every interface and you’re good to go on the networking front.
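If you’d rather script the ESXi side than click through the UI, the same networking can be done with esxcli. A minimal sketch for one link, assuming vmnic1 as the uplink and iSCSI1 as the vSwitch/port group name (repeat per link with your own names and addresses; skip the MTU options if you’re not using jumbo frames):
esxcli network vswitch standard add --vswitch-name=iSCSI1
esxcli network vswitch standard uplink add --vswitch-name=iSCSI1 --uplink-name=vmnic1
esxcli network vswitch standard set --vswitch-name=iSCSI1 --mtu=9000
esxcli network vswitch standard portgroup add --vswitch-name=iSCSI1 --portgroup-name=iSCSI1
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI1 --mtu=9000
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.0.0.2 --netmask=255.255.255.252 --type=static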
ESXi iSCSI
iSCSI setup on ESXi is rather simple. Go to Storage > Adapters > Software iSCSI.
Enable iSCSI and add the 4 port bindings to the VMkernel NICs we created before. Then add dynamic targets pointing at the FreeNAS IP addresses.
Upon closing and reopening the Software iSCSI configurator, you will see it has picked up the FreeNAS iSCSI share in the Static targets area.
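The iSCSI adapter setup can also be scripted if you prefer. A rough esxcli equivalent of the steps above, assuming the software adapter shows up as vmhba64 (check the adapter list first, yours will likely differ) and vmk1-vmk4 are your iSCSI VMkernel NICs:
esxcli iscsi software set --enabled=true
esxcli iscsi adapter list                                    # note the vmhba name of the software adapter
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1  # port binding, repeat for vmk2-vmk4
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=10.0.0.1:3260   # repeat for each FreeNAS IP
esxcli storage core adapter rescan --adapter=vmhba64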
We have to add a datastore to finish this off. Go to Storage > Datastores > New datastore.
On the first page click Next. It may take a minute, but you should then see the FreeNAS zvol we created show up. Pick a name and click Next.
Use the full disk with the VMFS6 file system, click Next and then Finish.
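If you want to double-check what ESXi created, this can be verified from the shell later (we enable SSH in the testing section below); it lists each VMFS datastore and the naa device backing it:
esxcli storage vmfs extent list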
Guess what? It’s ready to test out!
iSCSI Testing and Tuning
To get higher speeds we need to set the path selection policy to round robin and make it switch paths on every I/O. By default it only switches paths every 1,000 IOPS.
To change this we need to go to the command line and input 3 commands. 3 commands isn’t that scary, right?
Go to Manage > Services and click on TSM-SSH. Click Start above it to start the SSH service.
Open an SSH program like PuTTY and input your ESXi server’s IP address.
Click Yes on the popup and log in with the ESXi credentials you use on the web interface. Type in the following.
esxcli storage nmp device list
Looking at the output shown, find the FreeNAS iSCSI share and note the naa ID I’ve drawn a box around. Also note the bottom red box; we’ll be changing that setting.
Now paste in the following command, replacing NAA_HERE with the contents of the red box starting with “naa.”. This will change the path selection policy to round robin.
esxcli storage nmp device set --device NAA_HERE --psp VMW_PSP_RR
Following that command, run this one to set the IOPS limit to 1.
for i in `esxcfg-scsidevs -c |awk '{print $1}' | grep NAA_HERE`; do esxcli storage nmp psp roundrobin deviceconfig set --type=iops --iops=1 --device=$i; done
After those 2 commands, type the very first one again to confirm it’s working. You should see something similar to the following.
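If you’d rather query just that one device instead of scrolling through the full list, this should also work (swap in your naa ID again); the IOOperation Limit should now read 1:
esxcli storage nmp psp roundrobin deviceconfig get --device NAA_HERE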
After making these changes we can see that performance has almost doubled. It’s far from 10GbE for me, but with 2x 10Gbit links you could achieve amazing results. Note that I’m using 2x 250GB Seagate HDDs from 2006, mirrored, for this tutorial.
This Post Has 33 Comments
Thank you for sharing your step-by-step guide on how to configure FreeNAS 11.2 along with ESXi 6.7 for optimum performance.
I don’t have access to dedicated hardware to build a FreeNAS host and was thinking of running FreeNAS as a VM on top of my lab ESXi 6.7 host, which has a couple of 2x 10Gbit NICs, so I am thinking that since the local network is running at 10Gbit the performance should be reasonably good.
Will be grateful if you are able to shed any light on the above.
Thanks!
Hi Manoj,
As long as you pass through a PCIe-based SATA controller to the VM so it can see the hard drives directly, it should work well.
For iSCSI, I believe running it on an internal network rather than over the physical NICs would be preferable, as there would be no speed cap. You could create a vSwitch with no NICs and put both the ESXi management (for iSCSI) and FreeNAS iSCSI on this vSwitch. I hope that makes sense; note that I haven’t tested it, so I’m not 100% on it.
Good luck and thank you!
John
Hi John!
I’m having trouble reconciling these two statements:
“running VMs on FreeNAS isn’t worth your time in my experience”
“However when performance starts to matter, FreeNAS shines with plenty of options to speed up VMs and/or databases and get great IOPS performance out of your spinning rust.”
Is FreeNAS ok for VMs or no?
Thank you! Looking forward to trying your write-up.
Hi Ahmed,
My apologies there. In the first statement I’m talking about running VMs on FreeNAS directly in bhyve. In the 2nd I’m talking about running VMs in ESXi with FreeNAS as the connected iSCSI storage.
Thank you!
Hi,
You mention not to use over 50% of capacity due to space reclamation. In regard to the attached thread link, is it a misunderstanding that if one uses VMFS6 and FreeNAS with iSCSI, one could use 100% of the disk and ESXi will reclaim the space that is freed?
https://forums.servethehome.com/index.php?threads/zfs-and-vmfs6-space-reclamation.26370/
Best Regards
Benjamin
Hi, I’ll need to do some more research on this before I can give an answer. Thanks for your comment.
John, just curious, did you find anything additional regarding this point about space reclamation?
Thanks for this tutorial! Is it hard to adapt this to 10Gbit between ESXi and FreeNAS, and between ESXi and the normal network switch?
And you said 8-16GB of memory is the low end for FreeNAS. How much is preferred then? 32GB?
Thanks in advance!
It should work fine at 10GbE; however, some people run into trouble with jumbo frames (9000 MTU).
Try it for the extra performance, but if you have trouble, MTU is a likely culprit.
As for RAM size, it depends on your storage size. For 8GB I wouldn’t go over a ~12TB pool; for 16GB maybe 30-40TB.
Some say 1GB of RAM per TB, but that isn’t based on any facts.
Due to the way ZFS uses RAM, going to 16GB (or 32GB on a large pool) will make a noticeable difference when running many VMs versus less RAM.
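If you want to see how much of your RAM the ARC is actually using, something like this from the FreeNAS shell should show it (going from memory here, so treat it as a pointer rather than gospel):
sysctl kstat.zfs.misc.arcstats.size   # current ARC size in bytes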
Good luck!
Hi John,
I have been looking for something like this for my setup.
I have mine cobbled together, but the performance is abysmal.
I want to try your setup.
The difference with mine is I have 3 Supermicro servers. Two are ESXi hosts, and one is the FreeNAS box.
I have 2x 10G NICs in each server, plugged into a D-Link DXS-1210 10G switch.
How would I adapt the networking to make this a better setup? Currently I have 4 NICs in each of the three servers: 2x 1G NICs and 2x 10G NICs.
I want to use the 10G NICs for iSCSI only, and the 1G NICs for everything else. Ideally I’d like to use the 10G for vMotion as well.
Hi John,
I ditched the previous idea of building a virtualized FreeNAS and got myself a bare-metal FreeNAS box to connect my 2 ESXi home lab hosts.
In your setup, I see you have 4 NICs on your FreeNAS box, whereas on my motherboard I only have 4 factory-fitted NICs and no slots for adding any more. So I am curious how I could maximize network traffic, considering I need to be able to manage the FreeNAS box using one of the 4 NICs I have. I could create separate VLANs, but then I will need an L3 switch, which I want to try to avoid. Worst case, I will have to live with a single path on one of my ESXi hosts.
On my ESXi hosts, I have 4 NICs split as below and was going to use vNIC2 and vNIC3 for iSCSI traffic.
vNIC0 ——–> Management
vNIC1 ——–> VM Network
vNIC2 ——–> Free
vNIC3 ——–> Free
Any ideas on how I should configure the multipathing?
Thanks!
Hi, you don’t need a layer 3 switch for VLANs; a smart or managed layer 2 switch will suffice.
Then you can set up Management and VM Network on one port on different VLANs, leaving 3 ports free for iSCSI.
vNIC0 ——–> Management vlan 10, VM Network 20 vlan 20, VM Network 30 vlan 30
vNIC1 ——–> iSCSI vlan 800
vNIC2 ——–> iSCSI vlan 800
vNIC3 ——–> iSCSI vlan 800
A cheap TP-Link smart switch with VLAN support isn’t too expensive, and in my experience they’re great for home use.
Good luck!
I have a NUC with 64GB RAM. I installed 11.3 yesterday in a lab and deployed 2 ESXi VMs. I have the VMkernels and the FreeNAS on the same layer 2 network. For some reason, even though both ESXi servers can see the iSCSI target, only 1 can see the storage device at a time. Any ideas? Thank you!!
Hi, iSCSI isn’t file sharing, it’s block-level storage sharing. Therefore only 1 ESXi system is able to use the iSCSI share at a time.
Hi John,
Great posts in your blog and very informative.
Just a note on your reply here: are you sure about this? From other sources I have read that, at least with vCenter-managed ESXi servers, multiple ESXi hosts can access the same LUN.
– setup and configure iscsi on the freenas device.
– from your host go to the configure tab, select storage adapters, on the right there should be ‘iscsi software adapter’ if not there’s an option at the top of the right side to create one.
– select the iscsi software adapter, click the targets tab, click dynamic discovery, click add, throw the ip and port of the freenas iscsi in, click ok, rescan storage adapters.
I’ve read too some comments that hosts don’t even need to be in a cluster to allow this.
Thanks,
If you configure your ESXi hosts in a cluster in vCenter, you can have the entire cluster utilize an iSCSI-based SAN. You will have to set up iSCSI on each host, and in so doing point each host to your FreeNAS SAN with the correct iSCSI targets. This will make the iSCSI datastore available to each host in your cluster. You can even skip the clustering part if you really want, but clustering enables a lot of other nice features in vCenter.
I’m trying to get my brain around virtualizing FreeNAS on an ESXi server then passing the drives back to the server. My current thought is:
ESXi server with 1 SSD (hosting FreeNAS), then passing the 3 larger traditional hard drives directly to FreeNAS to create the pool. Connecting that pool to the ESXi host via iSCSI with virtual 10GbE NICs and using that pool to store my media and other virtual containers.
Is this the gist of your tutorial?
In this tutorial I cover using two separate physical systems but most elements can apply to a virtualized FreeNAS one too.
You will only need a single virtual NIC between the host and the VM, as there isn’t a cap on the speed that I’m aware of. It says 10GbE, however it may go above that if the system is fast enough.
Good luck!
Hi John,
Thanks for uploading this article.
I have configured FreeNAS in my lab network (a VM on Hyper-V, installed on an HP workstation in the same network).
I am not able to add this FreeNAS volume in ESXi (bare metal).
Configuration of the iSCSI software adapter has been completed.
I can see the dynamic and static targets, but I am unable to find the FreeNAS drive when adding a VMFS datastore.
Could you please let me know why this issue is happening?
Hi, I would like to give you a quick answer but I’m not sure why it isn’t working for you.
I don’t have an ESXi lab set up right now, otherwise I would look at options that may be preventing you from seeing the datastore.
Good luck!
I had a similar problem. My fix was to make sure I had 9000 MTU on the ESXi vSwitch + VMkernel + FreeNAS interfaces, then 9216 MTU on my Netgear switch (9000 for ‘data’ + 216 for overhead). My mistake was that the Netgear switch was set to 9000 for the FreeNAS ports, but I missed the ESXi ports and left them at 1500. Make sure your entire path is 9000 MTU!
Thanks for your tip. You saved my day. After setting my physical switch port MTU to 9216, everything worked.
Hello, quick question: in FreeNAS/TrueNAS, which performs better (transfer speed)? a) Setting up an iSCSI block share directly to a disk, or b) setting up the disk in a ZFS pool > zvol and presenting the zvol as an iSCSI block device? Thanks
Hey John, really enjoyed this article and it helped me get off the ground with my setup. One question I had though: I recall reading an article some time ago that said 1 large zvol is not suggested, and that you should instead partition into multiple smaller zvols. Is this your experience/suggestion as well? For reference, I have 23 TiB of storage available. I would love to present 23 TiB as one datastore to my ESXi environment, but won’t do it if it is not best practice.
Thanks!
Hi, I think when it comes to iSCSI sharing, one large zvol can cause fragmentation; I can’t remember the reasons too well.
I wish I could help more there, good luck!
Hey John, thanks for the very good tutorial. I was using NFS sharing but was facing very poor write speeds. Just to let you know, I have FreeNAS running as a virtual machine in ESXi (for over 3 years now, with SATA passthrough) on a Supermicro X11SSH-LN4F board. I first had to create 2 network adapters for FreeNAS and connect these adapters to virtual switches, the same as you have done above. I used 2 instead of 4, and I assume 1 would maybe also have been enough. Then I started setting up the adapters and iSCSI in FreeNAS and, in the end, also in ESXi. There was a small moment where adding the iSCSI in the ESXi adapter settings was not working; it now accepts the static and dynamic targets, but only via the software iSCSI adapter.
In the end everything is working very well, with speeds over 600MB/s. I assume the 16GB of RAM in FreeNAS acts as the buffer and it reaches the maximum SATA speed; it is just a guess.
Hey John, what are the step-back/recovery steps for the ESXi round robin? If I set it and then decide I don’t want the round robin, how do I un-round it?
Hi, I don’t have an ESXi box in my house right now to test this with but I would be guessing it’s this:
esxcli storage nmp device set --device NAA_HERE --psp VMW_PSP_MRU
Remember to change the NAA_HERE, good luck. You can also set the IOPS limit back up to 1000 the same way it’s set to 1 in this guide.
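If you keep round robin and just want the default path switching back, setting the IOPS limit to 1000 should look something like this (again, swap in your naa ID):
esxcli storage nmp psp roundrobin deviceconfig set --type=iops --iops=1000 --device NAA_HERE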
This is a very informative post about how to set up and optimize iSCSI between FreeNAS and VMware.
A couple of best practices from VMware for 6.7 and 7.0 while doing this (I’ve been using VMware for about 10 years now):
1) If you’re going to go the multiple-vSwitch route, your initiators and targets should /not/ be on the same subnet. In this case, use different broadcast domains and do not use port binding.
2) If your hosts (initiators) and targets are on the same subnet, you should use a single vSwitch and make sure you configure a 1:1 relationship between each iSCSI VMkernel adapter and physical NIC. This means port binding, and also making sure your teaming and failover policies have only one active NIC per port group.
3) If you use vCenter, you should use a Distributed Switch with the number of uplinks equal to the physical NICs you have for iSCSI; then create one distributed port group per iSCSI VMkernel/physical NIC combo. Inside each port group, make your VMkernel adapter, enabling iSCSI and binding it to a single unique iSCSI physical NIC. You should then still edit the teaming/failover policies in the port group to ensure that only one unique NIC is active per group; it should be the NIC you’re dedicating to that particular port group/VMkernel adapter.
4) Do /not/ try to use teaming with port binding, and do /not/ use LACP LAGs with iSCSI under any circumstances.
Thank you for a very good manual:
TrueNAS 12
3x 10Gb NICs
Seq Q32T1: 2.1 GB/s read, 2.5 GB/s write
Wonderful tutorial! It is still really helpful. Thank you for explaining it so well. Especially the round robin part is important! I just need a few more NICs to improve performance.
Thank you so much!
Thanks for the earlier version of this tutorial. It got my FreeNAS 11 / ESXi 6.0 combination up and running nicely. I want to expand the number of links (2 to 4), so I’m paying another visit.
I’ve got a few questions . . .
– I see the screenshot with the software used to measure performance. What are you using here? I’ve always relied on the FreeNAS GUI to get an idea of throughput speeds, though I’m not really sure that gives a fair picture.
– This is a bit outside the scope of your tutorial, but should I be setting up a separate pool on the FreeNAS side for each VM? At the moment I’ve got three pools and each stores 2-3 VMs. This makes it hard to (a) understand space allocation and (b) manage features like snapshots.
For (a), your suggestion is not to use more than 50% of the pool for VMs. I guess that is measured on the FreeNAS side (so a 500GB pool should not have more than 250GB of VMs)?
For (b), under this kind of setup should I even be thinking about snapshots on the FreeNAS side, or just manage this from ESXi?