I just connected a new host (HP DL360 G9, ESXi 6.5 build 5224529, 4x Gigabit Ethernet) to a new Cisco SG500 switch and am trying to configure link aggregation, but something is going wrong.
The problem is that no matter what I do, after configuring link aggregation on the Cisco side (with or without LACP), the connection on vmnic0 goes down, and the physical link is down too.
I have tried almost everything on the switch side and in ESXi - no luck. Load-balancing policy ("IP hash" and "Route based on originating port ID"), resetting the Cisco config, even a vDS with LACP - vmnic0, which carries the management IP, goes down as soon as the ports are aggregated (the connection to this IP is still available!).
I tried to team only vmnic1, vmnic2 and vmnic3 - success.
vmnic2 and vmnic3 - success.
vmnic0, vmnic1, vmnic2 and vmnic3 - no good.
vmnic0 and vmnic1 - no good.
I tried rebooting ESXi - the physical link comes UP, but only until the ntg3 (network card) driver is loaded.
I tried resetting the ESXi configuration and rebooting - same result: even with the default config the first link goes down (the connection to the management IP is NOT available in this case).
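For reference, this is roughly what I am configuring on both sides when I test without LACP (the interface and channel numbers and the vSwitch name are just examples from my setup, so treat this as a sketch rather than my exact config).

On the switch (Cisco SG500, static channel - "mode on" is a static channel, "mode auto" would be LACP):

  interface gi1/1/1
   channel-group 1 mode on
  interface Port-Channel1
   switchport mode trunk

On the host (ESXi shell), the vSwitch load-balancing policy set to IP hash to match the static channel:

  esxcli network vswitch standard policy failover set -v vSwitch0 -l iphash
  esxcli network vswitch standard policy failover get -v vSwitch0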
Can you move ESXi hosts with distributed switches from one vCenter to another? I know you couldn't under previous versions of ESXi, but I assume that has changed now.
We do it all the time with hosts that use standard switches, so that is not an issue.
You can export the vDS and import it into the new vCenter with preserved identifiers, but I believe this is not officially supported and there are some caveats, as explained in this blog:
I was recently asked to assist with an in-place upgrade of our ESXi 5.5u2 hosts to 6.0u3. The hosts are Dell R720s. The ISO was VMware-VMvisor-Installer-6.0.0.update03-5050593.x86_64-Dell_Customized-A00. The upgrade appeared to run smoothly: the host came up, registered in vCenter, and was manageable on the network. However, I soon realized that there was no storage available and no connection to the SAN. We link to the SAN using an FCoE connection from a QLogic 8262 installed in the host. The FCoE ports appear to be enabled in the BIOS, and I can scan the ports over FCoE from the BIOS with the SAN coming back OK. At the moment I am working from the assumption that the physical card and the connections to the SAN are OK because of this (possibly a bad assumption). Also, when managing the host from vSphere, the card shows up as a NIC adapter and returns the correct model name. However, the card will not present to the host as a storage adapter.
My immediate assumption is that there is a missing, corrupt, or incorrect ESXi driver for the FCoE functionality of the CNA, but this gets a bit outside my scope to troubleshoot as I'm relatively new to working with ESXi and these hosts. The documentation suggested that the upgrade process retains existing drivers, which it looks like it did. It also appears to have upgraded drivers, as the VIBs for the card were current and time-stamped from the time of the upgrade.
So, my issue is that I cannot get a QLogic QLE8262 to present to a 6.0u3 host as a storage adapter after the upgrade. My gut still tells me this is a driver issue, but there are multiple VIBs installed for this hardware and I do not know which one may be missing for this functionality.
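In case it helps, these are the kinds of commands I have been using from the ESXi shell to check the adapter and driver state (the grep pattern is just what I used; adjust it for your VIB names):

  # storage adapters the host actually sees (the QLE8262 does not show up here for me)
  esxcli storage core adapter list

  # FCoE-capable NICs and any activated FCoE adapters
  esxcli fcoe nic list
  esxcli fcoe adapter list

  # QLogic-related VIBs and their versions
  esxcli software vib list | grep -i ql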
Apologies in advance for missing or poorly communicated information! And thanks in advance for any assist on this from more experienced admins!
EDIT:
So after digging around a bit more, it looks like the host was using a VMware driver for FC, not a QLogic driver? The installed qlnativefc driver on the host is 2.1.50.0-1vmw.600.3.57.5050593, from the Dell custom upgrade ISO. The Dell Compatibility Guide only lists qlnativefc up to version 2.1.43.0-1 for firmware version 4.18.xx. (Of course, the QLogic site doesn't list ESXi 6.0 at all for compatibility. Dude! Dell, VMware, and QLogic need to chat!) I could try rolling back to 2.1.43 and see what happens, I guess? Still not sure if there might be other drivers involved for FCoE though. A driver called lpfc looks like it might be related as well...
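If I do try the rollback, my plan is something along these lines from the ESXi shell (the VIB file name and datastore path below are placeholders for whatever the 2.1.43.0-1 bundle from Dell is actually called):

  # check the currently installed qlnativefc driver version
  esxcli software vib list | grep -i qlnativefc

  # remove the 2.1.50.0 driver that came with the upgrade ISO
  esxcli software vib remove -n qlnativefc

  # install the older driver from a datastore (placeholder path and file name)
  esxcli software vib install -v /vmfs/volumes/datastore1/qlnativefc-2.1.43.0-1OEM.x86_64.vib

  # reboot for the driver change to take effect, then re-check the storage adapters
  reboot
  esxcli storage core adapter list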
So, this turned out to be a bad driver from the upgrade. I had tickets open with Dell, QLogic, and VMware. Dell claimed the card was good at the hardware level, so we should talk to QLogic. QLogic said, nope, that's an OEM card, Dell has their own validation process for drivers with it, talk to Dell. VMware called during lunch. Twice. So, looking at likely having to wipe the host and start again, I figured it wouldn't hurt to start playing with the drivers some.
I'm trying to vMotion VMs from an old cluster (vSphere 5.5 / dvSwitch 5.5) to another one (vSphere 6.5 / dvSwitch 6.5).
I'm using the vCSA 6.5 Flash client.
It gives me these errors:
"Change the network and host of a virtual machine during vmotion is not supported on the "source" host"
"Current connected network interface "network adapter 1" cannot use network XXXXXXXXX becuase the destination vDS has a different version and or vendor than the source.."
I am new to VMware. I have two ESXi servers added to an HA cluster. There are two virtual machines, XPVM and XPVM2, both on ESXi1. High availability works fine when both XPVMs are connected to vSwitch0: if I shut down ESXi1, both XPVMs move smoothly to the ESXi2 server. The problem is that when I connect a VM to another vSwitch, named vSwitch1, which I added later to ESXi1, that particular VM neither moves to ESXi2 nor can I move it through vMotion. Here XPVM2 is connected to vSwitch1. The ESXi2 server has the same networking configuration. I am getting an error while trying to change the host of XPVM2: “Currently connected network interface ‘Network adapter 1’ uses network VMnetwork2, which is not accessible.” I am unable to understand the problem or resolve the issue. Please guide me. All IPs ping from both VMs, and the VMkernel IPs ping as well.
XPVM works because it is connected to the "VM Network" port group, which is present on both ESXi1 and ESXi2; XPVM2 fails because it is connected to the "VM Network2" port group, which is present only on ESXi1 and NOT on ESXi2. So, create the port group "VM Network2" on host ESXi2 and it should fix your problem.
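If you prefer the command line to the vSphere Client, something along these lines on ESXi2 should do it (I'm assuming the port group hangs off vSwitch1 and is untagged; match the names and VLAN ID to what ESXi1 uses):

  # create the missing port group on ESXi2 with the same name as on ESXi1
  esxcli network vswitch standard portgroup add -p "VM Network2" -v vSwitch1

  # if the port group on ESXi1 is VLAN-tagged, set the same VLAN ID here (0 = untagged)
  esxcli network vswitch standard portgroup set -p "VM Network2" --vlan-id 0

  # verify
  esxcli network vswitch standard portgroup list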
Thank you very much. I had a sleepless night before your reply. I created the same port group on both ESXi servers, named "VM Network5", as "VM Network2" was creating confusion in my mind. After following your instructions the problem was resolved smoothly. Thank you very much again, Sir. I hope you will guide me again when needed.
We are in the process of migrating our ESXi 5.5 hosts and vCenter 5.5 from 1Gb NICs to 10Gb NICs.
We currently have two distributed switches. One has two 1Gb NICs and a MGT port group with a virtual adapter for management traffic. The other has two 10Gb NICs, with a MGT port group and a port group for each VLAN. We have already migrated the VM traffic over to the 10Gb switch; I just need to migrate the virtual adapter for management traffic over to the 10Gb NICs.
All the documentation states that you can migrate virtual adapters between a standard switch and a distributed switch and vice versa, but do you know whether it is supported to migrate virtual adapters between two distributed switches? I can't seem to find anything to confirm this in the Documentation Center.
In my virtual test lab I have managed to do this by clicking on the distributed switch you want to migrate the virtual adapters to and going to Manage Virtual Adapters. You can click Add and choose the option to migrate an existing virtual adapter. From there you can migrate the virtual adapter. This worked in my test lab, but I wonder if anyone has tried this in a live environment on physical hardware, and whether it is supported by VMware?
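As a side note, if the GUI path ever fails there is also a rough ESXi shell fallback for moving a VMkernel interface onto a vDS port (this is the sort of procedure used for recovering a management vmk; the switch name, dvPort ID, and addresses below are placeholders, and you would want to run it from the console/DCUI rather than over the interface you are removing):

  # list the current VMkernel interfaces and where they live
  esxcli network ip interface list

  # remove the vmk from the old switch and re-create it on a free dvPort of the new one
  esxcli network ip interface remove -i vmk1
  esxcli network ip interface add -i vmk1 --dvs-name DSwitch-10Gb --dvport-id 10

  # re-apply the IP configuration afterwards (values are placeholders)
  esxcli network ip interface ipv4 set -i vmk1 -t static -I 192.168.1.10 -N 255.255.255.0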
Probably a bit late to help you, but I ran into the same problem after adding a VMkernel NIC to the portgroup in question.
Despite the message, the VM that was apparently 'broken' was still able to use the network interface through that portgroup. I shut the VM down to increase its RAM, but noticed the web interface didn't report the increased size either. After I removed the VMkernel NIC from the group, the correct RAM size was reported, although as far as the GUI was concerned I needed to re-associate the portgroup with the virtual NIC.
This type of bug doesn't seem very unusual with the JavaScript GUI - it's not the first time I've seen the properties of a VM get mangled, and it almost always crashes and has to be reloaded when I power on a VM.
For infrastructure that I'm relying on, this kind of thing makes me very nervous.
Maybe my question is very easy, but I don't know how to do this because I'm a newbie at ESXi. I'm using ESXi 5.1. I have two different IPs in two different blocks (192.168.2.48 & 10.122.127.203). I have configured 10.122.127.203 manually; this IP's gateway is 10.122.127.193. After that, I defined a route for this manually configured IP inside my virtual machine:
Destination     Gateway          Genmask          Iface
10.122.127.0    10.122.127.193   255.255.255.192  eth1
10.122.127.192  *                255.255.255.192  eth1
and saved the route to the filesystem so it persists across reboots.
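The route itself corresponds to something like this inside the guest (interface and addresses as in the table above; how it is persisted depends on the distribution):

  # route the 10.122.127.0/26 block via the local gateway on eth1
  ip route add 10.122.127.0/26 via 10.122.127.193 dev eth1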
After that, I started to configure ESXi networking. I created a new virtual switch (vSwitch2) and connected a physical adapter to it. I have two virtual machine port groups: the first one is VM Network and the second one is VM Network 2. In the VM's Edit Settings dialog, I chose VM Network 2 for the second network adapter, so now my machine is connected to this physical adapter.
But I cannot ping the second IP.
Do I need to add a VMkernel port to the second virtual switch, and then define a route for this VMkernel port? Actually, I did this, but it still isn't working. I'm waiting for your help.
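These are the sort of commands I have been using from the ESXi shell to check the VMkernel side (vmk1 and the addresses are examples from my setup; ESXi 5.1 has a single routing table shared by all VMkernel ports):

  # list VMkernel interfaces and their IPv4 settings
  esxcli network ip interface list
  esxcli network ip interface ipv4 get

  # show the current VMkernel routing table
  esxcli network ip route ipv4 list

  # add a static route for the second block if it is missing
  esxcli network ip route ipv4 add --gateway 10.122.127.193 --network 10.122.127.0/26

  # test reachability from the VMkernel port on vSwitch2
  vmkping -I vmk1 10.122.127.193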