
vmnic0 link down after enabling link aggregation

Hello everybody.

I just connected a new host (HP DL360 G9, ESXi 6.5 build 5224529, 4x Gigabit Ethernet) to a new Cisco SG500 switch and am trying to configure link aggregation, but something is going wrong.

 

The problem is that no matter what I do, after configuring link aggregation on the Cisco side (with or without LACP), the connection on vmnic0 goes down - the physical link is down too.

I tried configuring almost everything on the switch side and in ESXi - no luck: load-balancing policy ("IP hash" and "port ID"), resetting the Cisco config, even a vDS with LACP. vmnic0, which carries the management IP, goes down once the ports are aggregated (yet the connection to this IP remains available!).

 

I tried to team only vmnic1, vmnic2 and vmnic3 - success.

vmnic2 and vmnic3 - success.

vmnic0, vmnic1, vmnic2 and vmnic3 - no good.

vmnic0 and vmnic1 - no good.

 

I tried rebooting ESXi - the physical link comes UP, but only until the ntg3 (network card) driver loads.

I tried resetting the ESXi configuration and rebooting - same result; even with the default config the first link goes down (in this case the connection to the management IP is NOT available).

 

What am I doing wrong?


Re: vmnic0 link down after enabling link aggregation

Re: vmnic0 link down after enabling link aggregation

Yes, I have already read those KBs.

 

Finally I found the source of the problem - it's the new ntg3 driver. After I run

esxcfg-module -e tg3

esxcfg-module -d ntg3

reboot

everything is good - no more failed uplinks.
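
In case anyone wants to verify the swap took effect after the reboot, this should show it (a quick check, assuming shell access to the host):

esxcli system module list | grep tg3

tg3 should show as enabled and loaded, and ntg3 as disabled.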

 

The developers of ntg3 should see this post. Am I right, chnb?

Re: DR Testing - Multi-Host Bubble Networks

Thanks very much Bayu. Appreciate the information.

Can you move ESXi Hosts with distributed switches from one vCenter to another?

Hi,

 

Can you move ESXi hosts with distributed switches from one vCenter to another? I know you couldn't under previous versions of ESXi, but I assume that has changed now.

 

We do it all the time with hosts that use standard switches, so that is not an issue.

Re: Can you move ESXi Hosts with distributed switches from one vCenter to another?

Re: Can you move ESXi Hosts with distributed switches from one vCenter to another?

vDSs are a pain to support compared to the standard vSwitch! Unless you have an extremely large environment to support, I see no benefit in them.

Re: Can you move ESXi Hosts with distributed switches from one vCenter to another?

There are also some features that are only available with a vDS, such as Network I/O Control, Load-Based Teaming (Route Based on Physical NIC Load), LACP, etc.

 

Some product/software integrations, such as VMware NSX or Cisco ACI VMM, also require a vDS.


QLogic QLE8262 not presenting as storage adapter after 6u3 upgrade.

I was recently asked to assist with an in-place upgrade of our ESXi 5.5u2 hosts to 6.0u3. The hosts are Dell R720s. The ISO was VMware-VMvisor-Installer-6.0.0.update03-5050593.x86_64-Dell_Customized-A00. The upgrade appeared to run smoothly: the host came up, registered in vCenter, and was manageable on the network. However, I soon realized that there was no storage available and no connection to the SAN. We link to the SAN using an FCoE connection from a QLogic 8262 installed in the host. The FCoE ports appear to be enabled in the BIOS, and I can scan the ports over FCoE from the BIOS with the SAN coming back OK. For the moment I am working from the assumption that the physical card and its connections to the SAN are OK because of this (possibly a bad assumption). Also, when managing the host from vSphere, the card is available as a NIC adapter and returns the correct model name. However, the card will not present to the host as a storage adapter.

 

My immediate assumption is that there is a missing, corrupt, or incorrect ESXi driver for the FCoE functionality of the CNA, but this gets a bit outside my scope to troubleshoot, as I'm relatively new to working with ESXi and these hosts. Documentation suggested that the upgrade process would retain existing drivers, which it looks like it did. It also appeared to upgrade drivers, as the VIBs were current for the card and time-stamped from the time of the upgrade.

 

So, my issue is that I cannot get a QLogic QLE8262 to present to a 6.0u3 host as a storage adapter after the upgrade. My gut still tells me this is a driver issue, but there are multiple VIBs installed for this hardware and I do not know which may be missing for this functionality.
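
In case it helps, here is roughly what can be checked from the host shell to see what the host actually detects (assuming the standard esxcli namespaces on 6.0):

esxcli storage core adapter list

esxcli fcoe adapter list

esxcli fcoe nic list

If the CNA shows up in none of these, the FCoE function of the card most likely has no working driver bound to it.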

 

VIB list: (screenshot)

 

Ports seen as NIC adapter: (screenshot)

 

Not present as storage adapter: (screenshot)

 

Compatibility info and the referenced driver repository for the card are from here:

VMware Compatibility Guide - I/O Device Search

 

Apologies in advance for missing or poorly communicated information, and thanks in advance for any assistance from more experienced admins!

 

EDIT:

So after digging around a bit more, it looks like the host was using a VMware driver for FC, not a QLogic driver? The installed qlnativefc driver on the host is 2.1.50.0-1vmw.600.3.57.5050593, from the Dell custom upgrade ISO. The Dell compatibility guide only lists up to qlnativefc version 2.1.43.0-1 for firmware version 4.18.xx. (Of course, the QLogic site doesn't list ESXi 6.0 at all for compatibility. Dude! Dell, VMware, and QLogic need to chat!) I could try rolling back to 2.1.43 and see what happens, I guess? Still not sure if there might be other drivers involved for FCoE, though. A driver named lpfc looks related as well...
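
To double-check exactly which qlnativefc VIB the host is running (version, vendor, install date), this should work from the shell:

esxcli software vib get -n qlnativefc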

Using standard vSwitches, VLANs, vMotion and HA

Hello,

 

I'm trying to understand the practical use of standard vSwitches and how I can create VLANs and still be able to perform vMotion and HA.

 

I am using vSphere 6 Enterprise, with two hosts linked to a shared datastore and vCenter installed.

I have created three standard vSwitches:

- VM Network along with the Management Network (VMkernel), with two associated vmnics (0 and 1)

- VM Network VLAN2 with vmnic2

- VM Network VLAN22 with vmnic3

 

I have five VMs. If I put them on VM Network, I can perform vMotion and High Availability between both hosts.

If I put my VMs on VLAN2 or VLAN22, I can no longer perform vMotion and HA. Adding VMkernel2 and VMkernel22 didn't help; I'm still stuck.

 

How can I get vMotion and HA working for VMs on VLAN2 and VLAN22?

 

Many thanks to anyone who can give me tips or some documentation about this.

Re: Using standard vSwitches, VLANs, vMotion and HA

Once again, the mistake was between the chair and the desk.

Keep it simple: use short names and, above all, IDENTICAL names between hosts. A stupid space was included in a name on one host and not the other.
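
For anyone wanting to catch this kind of mismatch quickly: list the port groups on each host from the shell and compare the output side by side (a simple check, assuming standard vSwitches as in this setup):

esxcli network vswitch standard portgroup list

Any difference in spelling, case, or stray whitespace between hosts will block vMotion for VMs on that port group.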

Traffic Flow in Nested VM

Hello

 

Can anyone please explain how a packet flows in a nested cloud environment?

 

Thanks

Re: QLogic QLE8262 not presenting as storage adapter after 6u3 upgrade.

So, this turned out to be a bad driver from the upgrade. I had tickets open with Dell, QLogic, and VMware. Dell claimed the card was good at the hardware level, so we should talk to QLogic. QLogic said: nope, that's an OEM card, Dell has its own validation process for drivers with it, talk to Dell. VMware called during lunch. Twice. So, looking at likely having to wipe the host and start again, I figured it wouldn't hurt to start playing with the drivers.

 

The Dell custom 6u3 ISO installed:

qlnativefc 2.1.50.0-1vmw.600.3.57.5050593  VMware VMwareCertified   2017-04-19

 

What worked was:

qlnativefc 2.1.43.0-1OEM.600.0.0.2768847   QLogic  VMwareCertified   2017-04-25

 

I guess 2.1.50 just wasn't quite ready for showtime on our SKU.

Anyhow, replacing this driver resolved the issue for us, and the CNA immediately came up as a storage adapter.
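
For anyone needing to do the same swap, the procedure was roughly this (the VIB file name below is illustrative - use the OEM package you actually downloaded):

esxcli software vib remove -n qlnativefc

esxcli software vib install -v /tmp/qlnativefc-2.1.43.0-1OEM.600.0.0.2768847.vib

reboot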

Cheers.

Unable to vMotion between hosts

Hi there!

I'm trying to vMotion VMs from an old cluster (vSphere 5.5 / dvSwitch 5.5) to a new one (vSphere 6.5 / dvSwitch 6.5).

I'm using the vCSA 6.5 Flash-based Web Client.

 

It gives me these errors:

 

"Change the network and host of a virtual machine during vmotion is not supported on the "source" host"

 

"Current connected network interface "network adapter 1" cannot use network XXXXXXXXX becuase the destination vDS has a different version and or vendor than the source.."

 

Source vDS = version 5.5 | ESXi host version 5.5

Destination vDS = version 6.5 | ESXi host version 6.5

 

I have already changed the vendor to VMware, Inc. with this KB in mind: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2126851

 

Any clues?

 

Best regards

“Currently connected network interface ‘Network adapter 1’ uses network VMnetwork2, which is not accessible.”

I am new to VMware. I have 2 ESXi servers added to an HA cluster. There are two virtual machines, XPVM and XPVM2, both on ESXi1. High availability works fine when both XPVMs are connected to vSwitch0: when I shut down ESXi1, both XPVMs move smoothly to the ESXi2 server. The problem is that when I connect a VM to another vSwitch, named vSwitch1, which I added later to ESXi1, that VM neither fails over to ESXi2 nor can I move it through vMotion. Here, XPVM2 is connected to vSwitch1. The ESXi2 server has the same networking configuration.

I get an error while trying to change the host of XPVM2: "Currently connected network interface 'Network adapter 1' uses network VMnetwork2, which is not accessible." I can neither understand the problem nor resolve the issue. Please guide me. All IPs ping from both VMs, and the VMkernel IPs ping as well.


Re: “Currently connected network interface ‘Network adapter 1’ uses network VMnetwork2, which is not accessible.”

XPVM works because it is connected to the "VM Network" port group, which is present on ESXi1 and ESXi2; XPVM2 fails because it is connected to the "VM Network2" port group, which is present only on ESXi1 and NOT on ESXi2. So, create the port group "VM Network2" on host ESXi2 and it should fix your problem.
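
If you prefer the command line, something like this on ESXi2 should do it (assuming the VMs ride on vSwitch1 as described - adjust the vSwitch name to match your host):

esxcli network vswitch standard portgroup add --portgroup-name="VM Network2" --vswitch-name=vSwitch1

The name has to match the one on ESXi1 exactly, including case and any spaces.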

Re: “Currently connected network interface ‘Network adapter 1’ uses network VMnetwork2, which is not accessible.”

Thank you very much. I had a sleepless night before your reply. I created the same port group on both ESXi servers, named "VM Network5", as "VM Network2" was creating confusion in my mind. After following your instructions the problem was resolved smoothly. Thank you very much again, Sir. I hope you will guide me again when needed.

Migrate an Existing Virtual Adapter between vSphere Distributed Switches

We are in the process of migrating our ESXi 5.5 hosts and vCenter 5.5 from 1Gb NICs to 10Gb NICs.

 

We currently have two distributed switches. One has two 1Gb NICs and a MGT port group containing the virtual adapter for management traffic. The other has two 10Gb NICs, with a MGT port group and a port group for each VLAN. We have already migrated the VM traffic over to the 10Gb switch. I just need to migrate the virtual adapter for management traffic over to the 10Gb NICs.

 

All the documentation states that you can migrate virtual adapters between a standard switch and a distributed switch and vice versa, but do you know if it is supported to migrate virtual adapters between distributed switches? I can't seem to find anything to confirm this in the Documentation Center.

 

In my virtual test lab I have managed to do this by clicking on the distributed switch you want to migrate the virtual adapters to and going to Manage Virtual Adapters. There you can click Add and choose the option to migrate an existing virtual adapter. This worked in my test lab, but I wonder if anyone has tried it in a live environment on physical hardware, and whether it is supported by VMware?
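
If you do try it on physical hardware, one low-risk sanity check is to list the VMkernel adapters from the host shell before and after the migration (this only reads state, it changes nothing):

esxcli network ip interface list

The output shows which portset/port group each vmk adapter is attached to, so you can confirm the management adapter really landed on the 10Gb switch.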

Re: network portgroup problem

Probably a bit late to help you, but I ran into the same problem after adding a VMkernel NIC to the port group in question.

 

Despite the message, the VM that was apparently 'broken' was still able to use the network interface through that port group. I shut the VM down to increase its RAM, but noticed the web interface didn't report the increased size either. After I removed the VMkernel NIC from the port group, the correct RAM size was reported, although as far as the GUI was concerned I had to re-associate the port group with the virtual NIC.

 

This type of bug doesn't seem very unusual in the JavaScript GUI - it's not the first time I've seen the properties of a VM get mangled, and it almost always crashes and has to be reloaded when I power on a VM.

 

For infrastructure that I'm relying on, this kind of thing makes me very nervous.

Cannot Ping Secondary IP in VM Created by ESXi 5.1

Hi All,

 

Maybe my question is very easy, but I don't know what to do because I'm a newbie with ESXi. I'm using ESXi 5.1. I have two different IPs in two different blocks (192.168.2.48 and 10.122.127.203). I have configured 10.122.127.203 manually; this IP's gateway is 10.122.127.193. After that, I defined a route for the manually configured IP in my virtual machine:

 

Destination       Gateway            Genmask            Iface
10.122.127.0      10.122.127.193     255.255.255.192    eth1
10.122.127.192    *                  255.255.255.192    eth1

 

and added the route to the file system so it persists.
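
For reference, the equivalent iproute2 commands inside the guest would be roughly the following (assuming eth1 is the second virtual NIC; 255.255.255.192 corresponds to a /26 prefix):

ip addr add 10.122.127.203/26 dev eth1

ip route add 10.122.127.0/26 via 10.122.127.193 dev eth1

The connected route for 10.122.127.192/26 is added automatically when the address is assigned.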

 

After that, I started to configure ESXi networking. I created a new virtual switch (vSwitch2) and connected a physical adapter to it. I have two virtual machine port groups: the first is VM Network and the second is VM Network 2. In the VM's Edit Settings, I chose VM Network 2 for the second network adapter, so the machine is now connected to this physical adapter.

 

But I cannot ping the second IP.

 

Do I need to add a VMkernel port to the second virtual switch, and after that do I need to define a route to this VMkernel port? Actually, I did this, but it still doesn't work. I'm waiting for your help.

 

Best Regards,

Murat.
