Channel: VMware Communities: Message List - vSphere™ vNetwork

Re: ESXi networking best practice


Assuming that both NICs are connected to the same physical switch/network, I'd actually recommend that you reduce the virtual network to a single vSwitch with two uplinks (both active). This way the VMs will be distributed across the uplinks, and you will also be protected against a single uplink failure.
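If it helps, this can also be done from the ESXi shell; a minimal sketch, assuming a standard vSwitch named vSwitch0 and uplinks vmnic0/vmnic1 (adjust the names to your host):

esxcli network vswitch standard uplink add --uplink-name=vmnic0 --vswitch-name=vSwitch0   # attach the first uplink
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch0   # attach the second uplink
# make both uplinks active so VM traffic is distributed across them
esxcli network vswitch standard policy failover set --active-uplinks=vmnic0,vmnic1 --vswitch-name=vSwitch0

The same settings are of course available in the vSphere Client under the vSwitch's teaming and failover configuration.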

 

André


non-IP hash mismatch - distributed switch 6.5


Hi,

 

I set up a distributed switch v6.5. When I activate the health check, I get a warning for teaming and failover.

 

dvs.jpg

The distributed port groups are all set to "Route based on physical NIC load", and changing this setting does not affect the warning message.

 

The ESXi host is connected via two 10Gb NICs to two different physical switches. (For various reasons there is no LACP configured.)

The management traffic of the ESXi host runs over two 1Gb NICs on a standard vSwitch.

 

Why is there this warning, and what can I do about it? I have already done a lot of tests and changes on both the VMware and the hardware side.
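(The checks on the ESXi shell side were along these lines; a rough sketch, output omitted and the vmnic names only examples:

esxcli network nic list                              # check that both 10Gb NICs show link up
esxcli network vswitch dvs vmware list               # check which vmnics are attached as vDS uplinks
esxcli network vswitch dvs vmware lacp status get    # should show no LAG, since no LACP is configured
)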

 

Regards Wolfgang

Unable to ping vm on LACP (vSphere 6.0)


I have configured LACP on one of my hosts connected to a Cisco SG300-20 (firmware 1.4.1.3) in L3 mode. Everything on both sides looks good. However, the virtual machine is not able to ping the default gateway on the SG300 nor is the SG300 able to ping the IP of the virtual machine. I know the IP configuration of the virtual machine is good because when I move it to a standard port group on the same VLAN all is good.

 

From vCenter 6.0.0 Build 3617395

 

LAG as seen in dvSwitch topology.

dvSwitch_toplogy.PNG

LAG ports on dvSwitch

lag1_ports_in_dvSwitch.PNG

Virtual machine ports on dvSwitch

RHEL01_ports_on_dvSwitch.PNG

 

On ESXi 6.0.0 Update 3 (build-5050593)

 

[root@VAPELHhost01:~] esxcli network vswitch dvs vmware list
VA-DvSwitch_for_Everything
   Name: VA-DvSwitch_for_Everything
   VDS ID: f1 ef 2b 50 9f f3 c2 d7-f1 81 53 04 b7 db 19 86
   Class: etherswitch
   Num Ports: 4352
   Used Ports: 8
   Configured Ports: 512
   MTU: 1500
   CDP Status: both
   Beacon Timeout: -1
   Uplinks: vmnic3, vmnic2
   VMware Branded: true
   DVPort:
         Client:
         DVPortgroup ID: dvportgroup-242
         In Use: false
         Port ID: 0

         Client:
         DVPortgroup ID: dvportgroup-242
         In Use: false
         Port ID: 1

         Client: vmnic3
         DVPortgroup ID: dvportgroup-242
         In Use: true
         Port ID: 36

         Client: vmnic2
         DVPortgroup ID: dvportgroup-242
         In Use: true
         Port ID: 37

         Client: RHEL01.eth0
         DVPortgroup ID: dvportgroup-385
         In Use: true
         Port ID: 28
[root@VAPELHhost01:~] esxcli network vswitch dvs vmware lacp config get
DVS Name                    LAG Name  LAG ID  NICs           Enabled  Mode    Load balance
--------------------------  --------  ------  -------------  -------  ------  --------------
VA-DvSwitch_for_Everything  lag1       23153  vmnic3,vmnic2     true  Active  Src and dst ip
[root@VAPELHhost01:~] esxcli network vswitch dvs vmware lacp stats get
DVSwitch                    LAGID  NIC     Rx Errors  Rx LACPDUs  Tx Errors  Tx LACPDUs
--------------------------  -----  ------  ---------  ----------  ---------  ----------
VA-DvSwitch_for_Everything  23153  vmnic2          0         281          0         280
VA-DvSwitch_for_Everything  23153  vmnic3          0         281          0         280
[root@VAPELHhost01:~] esxcli network vswitch dvs vmware lacp status get
VA-DvSwitch_for_Everything
   DVSwitch: VA-DvSwitch_for_Everything
   Flags: S - Device is sending Slow LACPDUs, F - Device is sending fast LACPDUs, A - Device is in active mode, P - Device is in passive mode
   LAGID: 23153
   Mode: Active
   Nic List:
         Local Information:
         Admin Key: 9
         Flags: SA
         Oper Key: 9
         Port Number: 32770
         Port Priority: 255
         Port State: ACT,AGG,SYN,COL,DIST,
         Nic: vmnic2
         Partner Information:
         Age: 00:00:05
         Device ID: 1c:de:a7:31:63:f0
         Flags: SA
         Oper Key: 1000
         Port Number: 56
         Port Priority: 1
         Port State: ACT,AGG,SYN,COL,DIST,
         State: Bundled

         Local Information:
         Admin Key: 9
         Flags: SA
         Oper Key: 9
         Port Number: 32771
         Port Priority: 255
         Port State: ACT,AGG,SYN,COL,DIST,
         Nic: vmnic3
         Partner Information:
         Age: 00:00:05
         Device ID: 1c:de:a7:31:63:f0
         Flags: SA
         Oper Key: 1000
         Port Number: 58
         Port Priority: 1
         Port State: ACT,AGG,SYN,COL,DIST,
         State: Bundled
[root@VAPELHhost01:~]

 

 

From Cisco SG300-20 (firmware 1.4.1.3)

 

LAG Management

Cisco_SG300_Lag_Management.PNG

LAG Settings

Cisco_SG300_Lag_Settings.PNG

LAG Settings detail

Cisco_SG300_Lag_Settings_Detail.PNG

 

I'm not sure what to do next. I don't have any other device capable of LACP to test the switch with.

Network IO Control question


Hi,

 

I have two cases where I will be using VMkernel (vmk) ports for traffic types other than those listed under NIOC's system traffic types (vMotion, management, etc.): one is the vmk port used by ScaleIO, and the other is a vmk port used to keep NBD backup traffic on the backup VLAN.

I have only two 10Gb uplinks per host, so I need to use NIOC to control the traffic, but I cannot see any way to assign shares/reservations to the ScaleIO and backup traffic that will be using these vmk ports. I can only control VM traffic or the system traffic types as listed. I'd like to assign shares etc. to the vmk ports.

 

Any ideas?

Re: Unable to ping vm on LACP (vSphere 6.0)


I was able to figure out the problem. In the Cisco switch I had the VLAN associated with the LAG instead of the individual switch ports. This was a bit counterintuitive, as I figured that because vSphere is sending VLAN-tagged traffic to the LAG, the VLAN should be associated with the LAG on the switch. Not so much.

clusters sharing VDS

$
0
0

I have ESXi & vCenter 5.5 U3. I want to migrate to EVC mode and also replace two older hardware hosts. Can I create a second cluster, turn on EVC mode, and still use the same vDS? I want to migrate everything to the new cluster and then get rid of cluster 1.

Re: clusters sharing VDS


Yes, no problem with sharing a vDS. A single vDS can span multiple hosts across multiple clusters, and a single host can be attached to multiple vDSs.

Re: clusters sharing VDS


Is there anything on the configuration of the 2nd cluster that I need to know about to make sure it all works and I don't run into any gotchas?


Re: non-IP hash mismatch - distributed switch 6.5


Yes, I already tried disabling and re-enabling it.

 

The link to the other thread I had already found before, but it only covers earlier versions, not v6.5.

 

Regarding my hardware, it's an HPE QLogic NC382i/NC532x aka NC523SFP.

 

The firmware mentioned on the VMware side is not available via VMware or via HPE support. I don't think (or at least hope) that this is the cause of the problem, because I have the latest available firmware and drivers from HPE.

 

Do you need any more information?

 

Regards Wolfgang

Re: clusters sharing VDS


My vCenter Server is on a hardware server, so it is running outside of VMware. Part of my issue is that we are not running in EVC mode but are using a manually configured CPUID mask, and the two servers I am trying to replace are old. I want to get everything onto EVC mode at the highest level, but in our current configuration vMotion is not working to the new machines. So I was trying to find a way to convert everything over without having to bring it all down at once, and to test it as well. I have never used EVC mode before and am not sure what will happen.

Move from LAG to LBT on a vDS?


Hi,

I have a couple of hosts and a vDS with two uplinks each in a LAG using Route based on IP hash; that's working fine.

I've been searching for a best-practice way of migrating from LAGs (and Route based on IP hash) to just plain physical ports (VLAN trunks) and Load Based Teaming (Route based on physical NIC load).

But to my surprise I could not find a single article or blog post discussing this procedure. Perhaps it's just so easy and obvious that I should know it already.

 

My hosts are on 6.0 and so is my vDS.

 

Kind Regards

Magnus

Re: ingress and egress traffic shaping


Moderator note: Moved to the relevant sub-forum area, VMware vSphere vNetwork.

Re: ingress and egress traffic shaping


Yes, you are correct. In VMware-speak:

 

ingress: Traffic is going into the vDS from the VM.

 

egress: Traffic is going out to the VM from the vDS.

 

"Within a standard vSwitch, you can only enforce traffic shaping on outbound traffic that is being sent out of an object--such as a VM or VMkernel port--toward another object. This is referred to by VMware as "ingress traffic" and refers to the fact that data is coming into the vSwitch by way of the virtual ports. Later, we cover how to set "egress traffic" shaping, which is the control of traffic being received by a port group headed toward a VM or VMkernel port, when we start talking about the distributed switch in the next chapter."

 

Source: Wahl & Pantol. (2014). Networking for VMware Administrators. Palo Alto: VMware Press.

 

 

However, VMware's definition of this stops at the virtual switch. Per the vSphere 6.5 documentation: "The traffic is classified to ingress and egress according to the traffic direction in the switch, not in the host" (http://pubs.vmware.com/vsphere-65/index.jsp#com.vmware.vsphere.networking.doc/GUID-964F5A21-0B53-468A-8A05-B71AA91F8A31.html?).

 

In Cisco-speak:

 

ingress: Traffic coming into a physical switch port (i.e. moving out of the host's physical NIC toward the switch).

 

egress: Traffic going out of a physical switch port (i.e. moving into the host's physical NIC from the switch).

 

But this means you are also right, because once you move past VMware's definition and into Cisco's, it's still the same direction. Therefore, ingress is VM > vSwitch > physical switch, and egress is physical switch > vSwitch > VM.
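For what it's worth, on a standard vSwitch the (ingress-only) shaping policy can also be inspected and set from the ESXi shell; a minimal sketch, assuming a vSwitch named vSwitch0 (the numbers are example values only; check the output of the matching get command for the exact units):

esxcli network vswitch standard policy shaping get --vswitch-name=vSwitch0   # show the current shaping policy
# enable shaping with example average/peak bandwidth and burst size values
esxcli network vswitch standard policy shaping set --vswitch-name=vSwitch0 --enabled=true --avg-bandwidth=102400 --peak-bandwidth=102400 --burst-size=102400

On a distributed switch the equivalent (ingress and egress) settings live on the distributed port group in vCenter, as per the documentation linked above.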


Re: DR Testing - Multi-Host Bubble Networks


Thanks.......

 

Is a vDS essential, or can this be achieved via vSS?

 

It still seems quite a risky configuration because you really REALLY don't want to mis-configure the VRFs!

 

We are indeed limited on physical NICs, so our "bubble" port groups would need to share physical NICs with the operational production VMs.

Re: ingress and egress traffic shaping


Just to add, traffic shaping is applied all the time regardless of available bandwidth, whereas Network I/O Control shares only take effect when there is contention, which is why NIOC is generally preferred over traffic shaping.

Re: DR Testing - Multi-Host Bubble Networks


Hi Bayu

 

I wondered if you'd be able to reply to my last question:

 

Is a vDS essential, or can this be achieved via vSS?

Re: DR Testing - Multi-Host Bubble Networks


Sorry I missed your last reply.

Yes, you can do it via vSS without issue.
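For a single host, a fully isolated bubble segment is just a standard vSwitch with no uplinks plus a port group on it; a minimal sketch from the ESXi shell, with placeholder names:

esxcli network vswitch standard add --vswitch-name=vSwitch-Bubble                                           # no uplinks attached, so traffic never leaves the host
esxcli network vswitch standard portgroup add --portgroup-name=Bubble-Net --vswitch-name=vSwitch-Bubble     # port group for the test VMs
esxcli network vswitch standard portgroup set --portgroup-name=Bubble-Net --vlan-id=0                       # no tagging needed on an isolated switch

If the bubble has to span multiple hosts and share physical NICs with production (as in your case), the port group does need uplinks, and the isolation then has to come from a dedicated VLAN/VRF on the physical side, as discussed above.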

Re: DR Testing - Multi-Host Bubble Networks


Just want to mention that my first reply was talking about SRM Test Networks.

But the methods (VRF or network virtualisation) would also be applicable without SRM.
