With FreeNAS’s new interface, this is out of date. I have written an updated one here. Also, I have a guide for FreeNAS, XCP-ng and iSCSI here.
In my home lab setup I’ve currently got 1 FreeNAS box and 1 VMware ESXi box. They’re connected using a multipath iSCSI link on cheap quad-gigabit cards I bought used. This setup works quite well for home lab use and provides a safe enough place to store my VMs. In this article I’ll guide you through the setup process I’ve used to get iSCSI working between FreeNAS and ESXi.
I’ll presume you’ve got a fresh install of FreeNAS and ESXi on their respective systems and quad or dual gigabit links between them.
If you’re still in the planning phase of your FreeNAS system, you can browse NAS Hard Drives on amazon here.
iXsystems have their own FreeNAS systems you can buy here.
iSCSI Setup: FreeNAS Side
For the links between the two systems we will want 1 subnet per connection for iSCSI to work correctly. We’ll run the network with /24 subnets like this:
FreeNAS     Cable     ESXi
10.0.0.1 <------> 10.0.0.2
10.0.1.1 <------> 10.0.1.2
10.0.2.1 <------> 10.0.2.2
10.0.3.1 <------> 10.0.3.2
On the FreeNAS WebGUI go to Network > Interfaces > Add Interface.
Name the interface to match the NIC: option, type in the IP, select the netmask and, finally, type mtu 9000 into the Options field. Having an MTU of 9000 (jumbo frames) will help with performance.
Do this for all interfaces you’re planning on using for iSCSI.
Now we can set up iSCSI itself. Go to Sharing > Block (iSCSI) > Portals > Add Portal. You’ll want to click Add extra Portal IP at the bottom and add all of your interfaces.
Go to Initiators > Add Initiator and click OK; the defaults are all that’s needed here.
Next up, Targets > Add Target. Pick a name and select your portal group and initiator group.
Under Extents > Add Extent you’ll need to pick either a device (zvol) you’ve created or create a file to share. In my case I’ve created a zvol to use as a device extent.
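If you prefer the Shell, a sparse (thin provisioned) zvol can also be created with the zfs command. This is just a sketch; the pool and dataset names (tank/esxi-vm) and the 500G size are placeholders for whatever fits your pool:

zfs create -s -V 500G tank/esxi-vm
zfs list -t volume

The second command simply confirms the new zvol shows up; it will then be selectable as a device extent in the WebGUI.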
Go to Associated Targets > Add Target / Extent and add the target and extent you’ve created.
The final step is to enable the iSCSI service by going to Services and clicking Start Now for iSCSI. Also tick the Start on boot box.
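If you want to double-check the target is actually up before moving on, FreeNAS 11 runs its iSCSI target through ctld, so a quick look from the Shell should confirm the service is running and listening on port 3260 (treat the service name as an assumption if you’re on a much older or newer build):

service ctld status
sockstat -4 -l | grep 3260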
FreeNAS should now be ready, so we can move on to ESXi.
ESXi Network Setup
Open up your WebGUI for ESXi 6.5 and navigate to Networking > Physical NICs. Take note of the vmnic numbers on the adapter you’re going to use. You can easily tell that vmnic0 is the odd one out and therefore not part of my quad-gigabit card in this picture. MAC addresses on the same card will generally only have the last hex value incremented by 1.
Once you know what ports to add, go to the Virtual Switches tab and click Add standard virtual switch. You’ll want to set the MTU to 9000 as we did in FreeNAS. Make a Virtual Switch for each network connection you’ll have.
In the top tab, go to VMkernel NICs and click Add VMkernel NIC. Fill in the name, select the switch, put the MTU to 9000 and give a static IP. Create one for each Virtual Switch.
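As a side note, the same vSwitch and VMkernel setup can be scripted with esxcli once SSH is enabled (we enable it later for the round robin tweaks anyway). This is only a sketch for one link; the switch, port group, vmnic, vmk and IP values are examples, so substitute your own:

esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-1
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-1 --mtu=9000
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.0.0.2 --netmask=255.255.255.0 --type=static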
Plugging in the Ethernet cables can be tricky if you don’t know which physical interfaces have which IP. You can look at the Interfaces page in FreeNAS and the Physical NICs page in ESXi to see which links are up or down. You can also try pinging from the FreeNAS Shell to test connections.
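A handy way to confirm both the cabling and that jumbo frames pass end to end is a don’t-fragment ping from the FreeNAS Shell. 8972 bytes of ICMP payload plus the IP and ICMP headers fills a 9000 byte frame exactly; swap in the ESXi VMkernel IP for whichever link you’re testing:

ping -D -s 8972 10.0.0.2

If this fails while a plain ping works, the MTU isn’t set to 9000 on both ends (or on something in between).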
iSCSI Setup on ESXi
Now that the network settings are out of the way we can configure iSCSI itself. Go to Storage > Adapters > Configure iSCSI and check the enable box. Under Network port bindings add all of your connections. Also, add all your FreeNAS iSCSI IPs to Dynamic targets. Click Save Configuration and when you go back in it should look like this (The part in blue will auto fill once you save and click on Configure iSCSI again).
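For reference, the same iSCSI configuration can also be done with esxcli. This is only a rough equivalent of the WebGUI steps above; vmhba64 matches the software iSCSI adapter in my output further down, and the vmk name and target IP are examples from my setup, so adjust them:

esxcli iscsi software set --enabled=true
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=10.0.0.1:3260
esxcli storage core adapter rescan --adapter=vmhba64

Repeat the networkportal and sendtarget lines for each VMkernel NIC and FreeNAS iSCSI IP.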
Once the iSCSI configuration is saved, click on the Datastores tab and go to New datastore.
- Create new VMFS datastore
- Select the iSCSI share from FreeNAS and Name it
- Use full disk
- Finish
You should be able to use the datastore now; however, there are a few more steps to get a round robin configuration for optimal performance.
For the next steps we will enable SSH and set iSCSI to use round robin. To enable SSH go to Manage > Services and click on TSM-SSH. Above the list of Services you’ll see the option to Start it.
Configure Round Robin Path Selection
If you’re on Windows, grab PuTTY and SSH in to the ESXi box. Here you can type in this to get the ID of the iSCSI share.
esxcli storage nmp device list
The information you want is the naa.id I’ve highlighted in red below.
naa.6589cfc00000009e27c03355442167c8
Device Display Name: FreeNAS iSCSI Disk (naa.6589cfc00000009e27c03355442167c8)
Storage Array Type: VMW_SATP_ALUA
Storage Array Type Device Config: {implicit_support=on; explicit_support=off; explicit_allow=on; alua_followover=on; action_OnRetryErrors=off; {TPG_id=1,TPG_state=AO}}
Path Selection Policy: VMW_PSP_MRU
Path Selection Policy Device Config: Current Path=vmhba64:C4:T0:L0
Path Selection Policy Device Custom Config:
Working Paths: vmhba64:C4:T0:L0
Is USB: false
To change the path selection policy to round robin you can use this command, replacing NAA_HERE with your naa. ID.
esxcli storage nmp device set --device NAA_HERE --psp VMW_PSP_RR
Next we can change the iSCSI IOPS setting from the default 1000 down to 1; doing this will help a lot with performance. To do that, take the naa. plus the first 4 numbers (in my example, it would be naa.6589) and place it into this command in place of NAA_HERE.
for i in `esxcfg-scsidevs -c |awk '{print $1}' | grep NAA_HERE`; do esxcli storage nmp psp roundrobin deviceconfig set --type=iops --iops=1 --device=$i; done
You can run the first command again to make sure everything has worked. If the settings I’ve marked in red are the same, it’s working. You’ll be presented with something like this.
naa.6589cfc00000009e27c03355442167c8
Device Display Name: FreeNAS iSCSI Disk (naa.6589cfc00000009e27c03355442167c8)
Storage Array Type: VMW_SATP_ALUA
Storage Array Type Device Config: {implicit_support=on; explicit_support=off; explicit_allow=on; alua_followover=on; action_OnRetryErrors=off; {TPG_id=1,TPG_state=AO}}
Path Selection Policy: VMW_PSP_RR
Path Selection Policy Device Config: {policy=iops,iops=1,bytes=10485760,useANO=0; lastPathIndex=0: NumIOsPending=0,numBytesPending=0}
Path Selection Policy Device Custom Config:
Working Paths: vmhba64:C3:T0:L0, vmhba64:C4:T0:L0, vmhba64:C14:T0:L0, vmhba64:C9:T0:L0
Is USB: false
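If you only want to confirm the IOPS limit on a single device rather than reading the whole list, this narrower check should also work (using the example naa. ID from above):

esxcli storage nmp psp roundrobin deviceconfig get --device=naa.6589cfc00000009e27c03355442167c8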
Conclusion
Now with this setup you can have a killer NAS and safe, high-performance VM storage on a budget. VMs will boot quite snappily thanks to the ARC (Adaptive Replacement Cache) in ZFS if you have plenty of RAM in your FreeNAS box.
This Post Has 33 Comments
Thank you very informative!
Great article, thanks. Great results by adding the ESXi round robin policies. Just a test box, remember, but with a basic test box directly connected to a FreeNAS iSCSI target at 1Gb, sequential read went from 117.6MB/s up to 235.2MB/s and write went from 116.8MB/s to 233.7MB/s. Dual NICs directly connected via Cat5e.
Thank you!
Good results you got there.
How do you set the IP and MTU on the other NICs when they are in the same subnet? From the CLI you can only set IPs but not the MTU, and from the GUI you cannot set more than 1 NIC.
Are you talking about in FreeNAS? I don’t believe it lets you have multiple NICs on the same subnet.
I got around this by having everything on different subnets.
You may be able to look up how to change the MTU in regular FreeBSD and use that on the CLI in freenas.
Best of luck, John
Question, can two ESXi hosts connect to the same FreeNAS iSCSI target?
iSCSI isn’t like regular file sharing due to being at block level. You’ll need to have 1 target per client.
If you want that you may need to look into VMware vSAN.
John
John Keen,
Completely wrong. Multiple ESXi hosts can and do connect to one iSCSI LUN. It’s by design, as long as your licences on the ESX side of things allow it.
That is true, I have never used it due to staying within the free license.
There’s a number of limitations with that as well, such as delays between hosts seeing each other’s changes etc., so not exactly a good solution for reads, but ESXi and FreeNAS are both more than happy to write to the same LUN at once.
Absolutely!
This is the basis for VMware’s vMotion (assuming you’re licensed for it). You connect multiple hosts to the same iSCSI datastore and you can migrate your VMs between different compute hosts while leaving the storage in the same place. I run FreeNAS in a production environment and have 2 main VMware clusters – one comprised of 6 ESXi hosts connected to 4 FreeNAS servers (each host has access to all 4 datastores), and the other 8 ESXi hosts connected to 3 FreeNAS servers.
The iSCSI standard will allow multiple connections to a single target, but remember that the client is managing the file system. As John stated, iSCSI is block level storage – meaning that FreeNAS is just sharing a ‘chunk’ of raw storage and it’s the client’s responsibility to correctly manage the file system.
VMware’s VMFS file system was designed to work correctly with multiple clients (ESXi hosts) connected simultaneously to the same file system… Other file systems (like Windows NTFS) were not. If you’re using a Windows machine with an iSCSI initiator to connect to a FreeNAS iSCSI target you’re going to want to make sure you stick to the 1 client per target rule (you can still have multiple connections from the same Windows machine – i.e. iSCSI multipathing).
The big points here are:
-iSCSI will allow multiple connections to the same extent – these multiple connections can originate from one client or many
-The File System in use (which FreeNAS is completely unaware of) is what determines whether simultaneous connections from multiple hosts are supported
-VMware’s VMFS file system absolutely supports multiple hosts being connected to the same datastore (in fact it’s required for things like VMotion and VMware HA to function properly)
Sorry for my noob question and conclusion.
Here is what I get from the Q&A:
One FreeNAS iSCSI share can be used by 2 or more ESXi hosts – is that correct?
Is round robin the same as an aggregated link or a failover link?
Thanks in advance
Great content!
This has greatly helped me migrate from xenserver to esxi by keeping my freenas storage.
I’m just having trouble with transfer rates that are no more than 60Mbps.
Thanks!
Thank you!
Good luck improving your speeds
Thanks very much for this guide. Very hands-on and so worked smoothly for me — who is not technically trained.
I was more worried about the FreeNAS side but it turned out the VM had a few minor twists as I was a version behind (6.0). All easily solved though since the basic approach is the same.
I’m glad to hear this helped you!
I’ve been watching the FreeNAS side and the NICs seem to max out at about 200Mbs. So I’m curious where the bottleneck might be. Where you have got 4 connections I’ve got 2. Sorry, a few naive questions here – while I’m not feeling any performance lags, there are times when users might make quite a big draw on data. Is 200Mbs what can be expected? Is this a limit of the protocol (both the disks and the NICs are not being pushed)? Do these data transfers in essence sum (two connections 400Mbs / 4 800Mbs)?
Thanks!
Hi,
The last few days I’ve started getting timeouts on the connection with the following error:
no ping reply (NOP-Out) after 5 seconds; dropping connection
This pattern follows others’ recent posts on the Freenas forum e.g.,
https://forums.freenas.org/index.php?threads/no-ping-reply-nop-out-after-5-seconds-kills-iscsi-volumes-under-load-v11-1.60882/page-2
This is the second time Freenas reports this issue within the space of a few days (first was one NIC and now both have similar issue). I am worried about data corruption.
There does not seem to be any very clear conclusion on the FreeNAS forums. Some suggest hardware issues (but I’m reasonably spec’d), others loading (but the report was at 3am under low use), and finally MTU settings – making them consistent across the servers or reducing to 1500 MTU (my settings are consistent across FreeNAS & ESXi as per your post, 9000 MTU).
Have you any experience with this or pointers as to how to unpick this issue?
Thanks!
Hi James,
I haven’t personally seen this. All I could suggest is running an older FreeNAS version that doesn’t have the issue temporarily.
Best of luck fixing your issue!
John
Thank you John Keen, That was a great tutorial and well organized.
YOU ROCK !!
Thank you, I am glad you like it!
Thanks John for sharing your knowledge. For the FreeNAS box, did you build your own box?
Hi Roger,
Yes, I built my own box. It was a Haswell Pentium. It worked flawlessly but is retired now.
Thank you so much for this. I’m having major issues using an NFS41 DS. Going to try this out to see if I have the same issues with iSCSI. Copying over the VMs now. Appreciate that you made this.
Good luck with the switch to iSCSI. Let me know how it goes!
Great article. You’ve outlined everything extremely well. Thanks for your contribution!
Thank you very much!
Nicely written. I think the current FreeNAS 11.2 interface is slightly different. I found this guide about setting up iSCSI disks…
https://www.sysprobs.com/nas-vmware-workstation-iscsi-target
But combining it with your ESXi storage setup is really helpful.
Thank you for the kind words, an update could be in order!
Brilliant guide!
My setup runs on an old Dell T20 with 4 even older Samsung 5400 RPM HDDs which form a 5.1 TB ZVOL. The drives are connected with an also old Dell 6Gbps SAS HBA adapter with passthrough. The virtualization of FreeNAS happens on ESXi 6.7 with 2 iSCSI connections between both to avoid ESXi complaining and tagging the connection as degraded. Everything is virtualized and runs on one box.
The performance of VMs with this setup is around 450 MBps and spikes at 490 MBps. That’s impressive for 8+ year old hardware in a home lab.
Thanks for writing this very helpful article!
That sounds like a great little lab setup! Very good performance!
Thank you for the kind words!
Just getting started with my home lab and curious what quad port network cards did you use? I will be ordering some NICs soon but want to make sure ESX 6.7, FreeNAS etc are all compatible. Intel, Broadcom, HP?
Hi,
I have Intel and Broadcom NICs. I find Intels always work 100%. Broadcom NICs have good iSCSI offloading but only work on about half of my motherboards.