This sounds like a problem that we're having, and we are using Broadcom NICs.
We are running Hyper-V on Windows Server 2008 R2 on a Dell R610 and have Windows
Server 2003 SP2 guests. The symptoms are that virtual server guests lose
network connectivity randomly (once every week or so), and some perform so
poorly after losing network connectivity that they have to be forced to shut
down rather than rebooted properly.
From what information I can find, there seems to be a link with Broadcom
adapters. Some suggest disabling 'IPv4 Large Send Offload' on the physical
adapters on the host, which we have done; however, we still get servers falling
over. Another suggestion was to disable 'IPv4 Large Send Offload' on the
guests' virtual adapters (inside the guest Windows Server 2003 OS), but this
caused servers to fall over every few hours. The only errors I can find before
the guests lose network connectivity are 'Event ID 5 - The miniport 'Microsoft
Virtual Machine Bus Network Adapter' hung.' followed by 'Event ID 4 - The
miniport 'Microsoft Virtual Machine Bus Network Adapter' reset.'
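For anyone wanting to script the LSO change rather than clicking through Device Manager, the standardized NDIS registry keywords can be set directly. A sketch only (the instance subkey "0007" is a made-up example; the correct one varies per adapter):

```shell
:: Sketch: disable IPv4 Large Send Offload (LSO v1 and v2) on one physical NIC.
:: The "0007" instance subkey is an assumed example -- inspect the DriverDesc
:: value under each numbered subkey of the network class GUID to find the
:: Broadcom adapter you actually want to change.
set CLASSKEY=HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318}\0007
reg add %CLASSKEY% /v *LsoV1IPv4 /t REG_SZ /d 0 /f
reg add %CLASSKEY% /v *LsoV2IPv4 /t REG_SZ /d 0 /f
:: Disable and re-enable the adapter (or reboot) for the change to take effect.
```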
We have a call open with Microsoft regarding this issue but we haven't got
"Brad Bird (MVP)" wrote:
> Hello Tiago,
> I am curious to know if the suggestions from RCan helped you resolve this.
> This sounds suspiciously like an issue I had at the University of Ottawa. To
> band-aid the issue, I would console into the VM and disable/enable the NIC
> in the guest to reset the IP stack. This was on IBM servers.
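> The disable/enable cycle can also be scripted from inside the guest with
> netsh, which saves clicking through the UI each time. A sketch ("Local Area
> Connection" is the Server 2003 default name and may differ on your guests):

```shell
:: Band-aid sketch: bounce the guest NIC to reset the IP stack.
:: "Local Area Connection" is an assumed default name -- list the real names
:: with "netsh interface show interface" first.
netsh interface set interface name="Local Area Connection" admin=disabled
netsh interface set interface name="Local Area Connection" admin=enabled
```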
> At the time, we thought this was due to Broadcom firmware, based on
> information from IBM, but the problem was never completely solved, and since
> I don't work there anymore, I don't know to this day if it was...
> I realize your scenario is not the same nor do you have the same hardware.
> I hope the band-aid helps save you time if it resolves your issue faster
> than a reboot...
> Do you have any network stats being monitored? This is where I would start.
> > Hello,
> > I'm having some issues in my Hyper-V environment.
> > My environment:
> > - Windows 2008 R2 Enterprise Edition running on a Dell PowerEdge R710
> > with 32GB RAM
> > - 3-node cluster set up with a quorum disk; both default-type cluster
> > disks and CSV disks
> > - 1 NIC dedicated to the host; 1 NIC set to trunk with two VLANs,
> > attached to virtual switch 2; 2 NICs as link aggregation, attached to
> > virtual switch 1
> > - 2 Fibre Channel HBAs attached to Brocade switches and Dell EMC CX300
> > storage
> > - Around 52 LUNs associated with the cluster
> > - Some LUNs contain VHDs; others are raw disks
> > - All guest NICs are synthetic
> > When I had a 2-node cluster, some guests would just lose network
> > connectivity; even removing the virtual switch connection from the guest
> > NIC and attaching it back doesn't resolve the problem, and I need to
> > restart the guest to get the network back. Sometimes the guests just hang.
> > After adding a 3rd node to the cluster, the hosts started to restart
> > after blue screens (I got an Overlapped I/O message on one of them);
> > sometimes the host doesn't restart itself but simply restarts the guests
> > running on it.
> > I've been searching for a week already and haven't found anything
> > that really helps.
> > Does anyone have any clue?
> > If you need more info, please tell me. I'll be happy to help you to
> > help me :-D
> > Thanks for your time