@FCOETech I have the exact same issue. How did you roll back to that driver?
Re: QLogic QLE8262 not presenting as a storage adapter after the 6u3 upgrade.
Re: "Cannot open the disk '..' or one of the snapshot disks it depends on. 22 (Invalid argument)." - But no Lock-files!
Hello,
Depending on the storage, there may not be a lock file. This KB has more information on looking for locks: Investigating virtual machine file locks on ESXi (10051) | VMware KB
The vmkfstools -D command is the most useful for determining the MAC address of the host holding the lock.
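For example, run it against the locked descriptor or flat file; the path here is just a placeholder for your VM's actual VMDK:
~ # vmkfstools -D /vmfs/volumes/datastore1/MyVM/MyVM-flat.vmdk
In the output, the last 12 hex digits of the "owner" field on the Lock line should be the MAC address of the host holding the lock; an owner of all zeros generally means the lock is held by the host you are logged in to.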
Let me know if this helps.
Isolate subnetted VMs
Hi, I'm trying to find some information on how to isolate guests in a multi-customer environment in ESXi. Google did not help; maybe I am using the wrong terminology in my searches. Or point me to some resource that I can read.
Thanks and Best Regards
Re: Isolate subnetted VMs
You will want to look into using NSX, or you may be able to get away with using private VLANs.
Private VLAN (PVLAN) on vNetwork Distributed Switch - Concept Overview (1010691) | VMware KB
Re: Isolate subnetted VMs
Hi Robbieozer, thank you for the quick reply. I will try your suggestions!
Best Regards
Re: Vsphere VST question
Hi, yes, my firewall (called HOME in the pic) is attached to a port group belonging to VLAN 20. What I think is happening is that traffic exiting port 4 on the physical switch does not seem to reach port 2 on the physical switch. I have made sure ports 4 and 2 are in trunk mode to allow VLAN 20 to the outside.
Also, I have a route for 192.168.0.0 from 192.168.0.12, which is my firewall's WAN interface.
Re: promiscuous mode causes 100% packet loss on dvportgroup or packet flooding
This could be caused by a network loop. You would need to run a Wireshark capture to understand the network traffic flow.
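If you would rather capture on the host itself, ESXi 5.5 and later include pktcap-uw; something like this (the uplink name is a placeholder for whichever vmnic backs the affected dvPortgroup):
~ # pktcap-uw --uplink vmnic0 -o /tmp/vmnic0.pcap
Then copy the .pcap off the host and open it in Wireshark to look for duplicated/looped frames.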
Re: internal only portGroup on vDS
Hi Rodrigo, in that case you create a new VLAN ID for that isolated segment and create the port group for that isolated VLAN.
Do not create a gateway interface (VLAN interface/SVI) for it on the physical switch/router.
If you need multiple isolated VLANs and they need to be able to reach each other, then you can either use VRF (Virtual Routing and Forwarding) or Private VLANs.
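On a standard switch, the host-side part would look something like this (the port group name and VLAN ID are placeholders; on a vDS you would create the port group through vCenter instead):
~ # esxcli network vswitch standard portgroup add -p Isolated-V250 -v vSwitch1
~ # esxcli network vswitch standard portgroup set -p Isolated-V250 --vlan-id 250
As long as no SVI/gateway exists for that VLAN upstream, VMs in that port group can only reach each other.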
Virtual Machines on one VLAN lose connectivity after vMotion to one host in a cluster but work on all other hosts in the cluster
I have an 8-node cluster. The issue I am facing is that we have 6 VMs on one particular VLAN (103), and if I migrate any of the VMs in VLAN 103 to one particular host, ESXi-06, the VM loses network connectivity.
It is not reachable, and even from a VM console, if I try to ping the gateway it shows request timed out. If I go to the Networking tab under Monitor for this particular host (ESXi-06), under physical network adapters, I can see VLAN 103 in the observed IP ranges. Can anyone help me with how to actually start on this issue?
If the VMs on VLAN 103 are migrated to any of the other hosts in the cluster, they just work perfectly.
We are using a Standard Switch config.
Re: Virtual Machines on one VLAN lose connectivity after vMotion to one host in a cluster but work on all other hosts in the cluster
Since you're using a Standard Switch, can you confirm that VLAN 103 is assigned to the correct port group on the faulty destination host? And can you confirm that the physical switch ports where the hosts are connected share the exact same VLAN configuration?
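If it helps, you can verify the host side from the faulty host's shell; for example:
~ # esxcli network vswitch standard list
~ # esxcli network vswitch standard portgroup list
The second command lists every port group with its vSwitch, active clients, and VLAN ID, so a VLAN mismatch on that host should stand out.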
Re: Isolate subnetted VMs
Hi Robbieozer, I am not an advanced VMware user, just learning, so I appreciate your advice very much. Would it help in any way to use vCloud?
Best Regards
Re: Isolate subnetted VMs
Sorry Robbieozer, NSX is part of vCloud.
Re: Virtual Machines on one VLAN lose connectivity after vMotion to one host in a cluster but work on all other hosts in the cluster
VLAN 103 is assigned to the correct port group, just like on any other host, and yes, the physical switch ports where the hosts are connected share the exact same VLAN configuration.
I tried to run some ping tests from my VM (10.244.103.181). The VLAN 103 IP range is 10.244.103.1 - 10.244.103.255.
If the VM is on a different host, it will ping the default gateway for the VLAN, and it will ping any other subnet too.
If the VM is migrated to this faulty host, it loses pings to any other subnet, to the gateway, everything.
For example, with the VM for VLAN 103 residing on a different host:
C:\Users\Administrator>ping 10.244.103.1
Pinging 10.244.103.1 with 32 bytes of data:
Reply from 10.244.103.1: bytes=32 time=1ms TTL=255
Reply from 10.244.103.1: bytes=32 time<1ms TTL=255
Reply from 10.244.103.1: bytes=32 time=5ms TTL=255
Reply from 10.244.103.1: bytes=32 time=1ms TTL=255
Ping statistics for 10.244.103.1:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 5ms, Average = 1ms
C:\Users\Administrator>ping 10.244.104.1
Pinging 10.244.104.1 with 32 bytes of data:
Reply from 10.244.104.1: bytes=32 time<1ms TTL=255
Reply from 10.244.104.1: bytes=32 time=1ms TTL=255
Reply from 10.244.104.1: bytes=32 time<1ms TTL=255
Reply from 10.244.104.1: bytes=32 time=1ms TTL=255
Ping statistics for 10.244.104.1:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 1ms, Average = 0ms
C:\Users\Administrator>ping 10.244.104.74
Pinging 10.244.104.74 with 32 bytes of data:
Reply from 10.244.104.74: bytes=32 time<1ms TTL=127
Reply from 10.244.104.74: bytes=32 time<1ms TTL=127
Reply from 10.244.104.74: bytes=32 time<1ms TTL=127
Reply from 10.244.104.74: bytes=32 time<1ms TTL=127
C:\Users\Administrator>tracert 10.244.226.74
Tracing route to 10.244.226.74 over a maximum of 30 hops
1 2 ms <1 ms <1 ms [10.244.103.3]
2 7 ms <1 ms <1 ms [10.244.246.9]
3 <1 ms <1 ms <1 ms [10.244.246.26]
4 <1 ms <1 ms <1 ms 10.244.226.74
VM migrated to the faulty host Vsphre-NA2-06:
C:\Users\Administrator>ping 10.244.104.1
Pinging 10.244.104.1 with 32 bytes of data:
Reply from 10.244.103.181: Destination host unreachable.
Request timed out.
Reply from 10.244.103.181: Destination host unreachable.
Reply from 10.244.103.181: Destination host unreachable.
Ping statistics for 10.244.104.1:
Packets: Sent = 4, Received = 3, Lost = 1 (25% loss),
C:\Users\Administrator>ping 10.244.103.1
Pinging 10.244.103.1 with 32 bytes of data:
Request timed out.
Reply from 10.244.103.181: Destination host unreachable.
Request timed out.
Reply from 10.244.103.181: Destination host unreachable.
Ping statistics for 10.244.103.1:
Packets: Sent = 4, Received = 2, Lost = 2 (50% loss),
C:\Users\Administrator>tracerT 10.244.103.1
Tracing route to 10.244.103.1 over a maximum of 30 hops
1 VM.XYZ.com [10.244.103.181] reports: Destination host unreachable.
Trace complete.
C:\Users\Administrator>PING 10.244.226.74
Pinging 10.244.226.74 with 32 bytes of data:
Reply from 10.244.103.181: Destination host unreachable.
Reply from 10.244.103.181: Destination host unreachable.
Request timed out.
Reply from 10.244.103.181: Destination host unreachable.
Ping statistics for 10.244.226.74:
Packets: Sent = 4, Received = 3, Lost = 1 (25% loss),
C:\Users\Administrator>TRACERT 10.244.226.74
Tracing route to 10.244.226.74 over a maximum of 30 hops
1 VM.XYZ.com [10.244.103.181] reports: Destination host unreachable.
Trace complete.
Restarted the management network.
Restarted the management agents.
Rebooted the machine.
Is there any specific log file I can check here?
Re: Low virtual network adapter performance [vmxnet3]. ESXi 5.5U3.
Check if the information in KB 2058349 helps.
Re: Virtual Machines on one VLAN lose connectivity after vMotion to one host in a cluster but work on all other hosts in the cluster
This one is resolved now.
We are using 2 NICs per vSwitch for redundancy, and when I went to the settings, I believe the vmnic through which all VLANs are accessible was in standby.
I changed the VLAN settings (NIC Teaming), made vmnic2 active, and everything was working fine after that.
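For anyone hitting the same thing, the failover order can also be checked and corrected from the host's shell; the vSwitch and vmnic names below are placeholders for your own:
~ # esxcli network vswitch standard policy failover get -v vSwitch0
~ # esxcli network vswitch standard policy failover set -v vSwitch0 -a vmnic2,vmnic0
The get shows which uplinks are active vs. standby, and set -a promotes the listed vmnics to active.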
Port binding: dynamic vs. ephemeral
As I study for my VCP6-DCV, I'm trying to get a better understanding of dynamic vs. ephemeral port binding for dvPortGroups. After doing some research (see below) I need to confirm some things.
1) Because ephemeral ports act like ports on standard port groups, and VMware refers to this as "no binding", what VMware really means is that port binding is, in effect, delegated to the ESXi hosts.
2) Therefore, the difference between dynamic and ephemeral is that with dynamic ports the vDS does the actual port binding (at VM power-on), while with ephemeral ports the host does the port binding.
3) Does this mean that ephemeral ports don't count against the "Ports per distributed switch" and "Distributed virtual network switch ports per vCenter" configuration maximums?
[1] vNetwork Distributed PortGroup (dvPortGroup) configuration (1010593) (http://kb.vmware.com/kb/1010593)
[2] Configuring vNetwork Distributed Switch for VMware View (http://myvirtualcloud.net/configuring-vnetwork-distributed-switch-for-vmware-view/)
[3] Static, Dynamic and Ephemeral Binding in Distributed Switches (http://www.vmskills.com/2010/10/static-dynamic-and-ephemeral-binding-in.html)
[4] ESXi/ESX Configuration Maximums (1003497) (http://kb.vmware.com/kb/1003497)
Re: Port binding: dynamic vs. ephemeral
For the first and second statements, yes. Please refer to the KB excerpt below.
3) No, it does count against the VDS and vCenter limits, since adding ephemeral ports pushes toward the vCenter maximums. This limit is 1016 ports. For the vSphere 4.x/5.x limits you have the link below, and for vSphere 6.x please refer to https://www.vmware.com/pdf/vsphere6/r60/vsphere-60-configuration-maximums.pdf
Ephemeral binding
In a port group configured with ephemeral binding, a port is created and assigned to a virtual machine by the host when the virtual machine is powered on and its NIC is in a connected state. When the virtual machine powers off or the NIC of the virtual machine is disconnected, the port is deleted.
You can assign a virtual machine to a distributed port group with ephemeral port binding on ESX/ESXi and vCenter, giving you the flexibility to manage virtual machine connections through the host when vCenter is down. Although only ephemeral binding allows you to modify virtual machine network connections when vCenter is down, network traffic is unaffected by vCenter failure regardless of port binding type.
Note: Ephemeral port groups must be used only for recovery purposes, when you want to provision ports directly on a host bypassing vCenter Server, and not for any other case. This is true for several reasons:
- Scalability: An ESX/ESXi 4.x host can support up to 1016 ephemeral port groups and an ESXi 5.x host can support up to 256 ephemeral port groups. Since ephemeral port groups are always pushed to hosts, this effectively is also the vCenter Server limit. For more information, see Configuration Maximums for VMware vSphere 5.0 and Configuration Maximums for VMware vSphere 4.1.
- Performance: Every operation, including add-host and virtual machine power operations, is comparatively slower because ports are created/destroyed in the operation code path. Virtual machine operations are far more frequent than add-host or switch operations, so ephemeral ports are more demanding in general.
- Non-persistent (that is, "ephemeral") ports: Port-level permissions and controls are lost across power cycles, so no historical context is saved.
Re: Port binding: dynamic vs. ephemeral
3) No, it does count against the VDS and vCenter limits, since adding ephemeral ports pushes toward the vCenter maximums. This limit is 1016 ports. For the vSphere 4.x/5.x limits you have the link below, and for vSphere 6.x please refer to https://www.vmware.com/pdf/vsphere6/r60/vsphere-60-configuration-maximums.pdf
I understand this (maximum active ports per host (VDS and VSS) = 1016) differently. This is a per-host limit, not a vCenter/VDS limit. Precisely because the VDS is not managing the port binding (it has been delegated to the host), ephemeral ports do not count against either the "Ports per distributed switch" or "Distributed virtual network switch ports per vCenter" configuration maximums. This goes back to point #1 about VMware referring to ephemeral ports as "no binding", because the VDS is not tracking the binding.
In a hypothetical situation, if I added enough hosts to vCenter with ephemeral port groups and bound 1016 vNICs per host, I could exceed the vCenter/VDS limit (60k in the case of vSphere 6.0); for example, 60 hosts x 1016 ports each = 60,960 ports, which is over 60,000.
Yes/No/Maybe?
Creating a new VMkernel port breaks management network connectivity
Cisco UCS 220 v4 server
ESXi 5.5 custom image for Cisco
2 physical NICs (2 vmnics)
I currently have 1 vSwitch. The management network (vmnic0) is configured on this vSwitch. I'm trying to set up a VMkernel port for iSCSI traffic using the other NIC, vmnic1. I am able to click through and configure it from the 'Add Network' wizard, but when I click Finish, vSphere spins and spins. After a minute it is no longer able to communicate over the management port. I have to log into the ESXi console, restore network settings, and then re-IP the management network to get reconnected.
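For reference, this is (as far as I understand it) the shell equivalent of what I am doing in the wizard; the port group name, vmk number, and IP are placeholders:
~ # esxcli network vswitch standard portgroup add -p iSCSI -v vSwitch0
~ # esxcli network ip interface add -i vmk1 -p iSCSI
~ # esxcli network ip interface ipv4 set -i vmk1 -I 192.168.1.10 -N 255.255.255.0 -t static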
What am I doing wrong?
ESXi networking best practice
Hi everyone, recently I had to add three VMs to our ESXi host, and I'm trying to build this in the best possible way with regard to load balancing.
The initial configuration was the same as in the screenshot, except for those 3 newly added VMs. So there were vSwitch0, with the Management network only, and vSwitch1, with the actual VMs (hosts). Instead of adding those three VMs to vSwitch1 (where all the other hosts reside), I thought I had better add them to vSwitch0 to make the load balancing "better".
There are two physical network cards in total connected to the server, one for vSwitch0 and another for vSwitch1.
Is it a good idea, guys, to have VMs (hosts) on the same network card that is used to manage ESXi (the management network)?
The thing is, I inherited all of this (as most of us do), and I don't want to make it worse than it was, while at the same time building anything new according to best practices. Thanks.