SAN for Hyper-V


iBeech

Hey all,

I'm wondering if anyone can recommend a SAN configuration for my requirements:

-25 physical hosts, capable of hosting 40 virtual machines each (1000 VMs)
-VMs will be deployed via SCVMM
-Usage is for development / testing. Logging in, installing applications,
some database, and scrapping said VM (standard dev work)
-None of the VMs are mission critical, so while good performance is needed
for day-to-day use, it is not critical.

We are after 25TB storage, expandable to over 50TB for the future.

I have been recommended the NetApp FAS2040, but I'm unsure if it is up to
the task.

Does anyone have any experience / recommendations?

Regards,
Tom
 


RCan

Hi iBeech,

you will hear different answers here, as every SAN admin has their own
preferences, and the choice of SAN hardware vendor and platform also depends
on several factors that are still unknown.
Generally you should be careful with the sizing of your storage subsystem, as
this is usually one of the critical areas where performance degrades for all
running VMs. The golden rule here: as many spindles as possible :-)
Before your question can be answered correctly, you need to know the workload
(I/O) requirements of all 1000 VMs and how that load is distributed. 40 VMs
per host is generally no problem for Hyper-V as long as you take care of the
SAN connection performance; I would prefer 10Gbit for iSCSI, or at minimum
multiple 8Gbit FC adapters. The general decision between Fibre Channel and
iSCSI should also be worked out. Do you already have a SAN network in place?
If yes, what are your SAN switches capable of: 2, 4 or 8Gbit? If not, is
10Gbit iSCSI an option?
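
To make that a bit more concrete, here is a rough back-of-the-envelope
comparison of what the two connection options could mean per host. The dual
links, the 80% efficiency figure and the "all 40 VMs busy at once" worst case
are just assumptions for illustration, not a sizing:

# Rough per-host bandwidth sketch: dual 10GbE iSCSI vs. dual 8Gbit FC.
# The efficiency and concurrency figures are assumptions, not measurements.

GBIT = 1_000_000_000  # bits per second

def usable_mb_per_s(link_gbit, links, efficiency=0.8):
    """Approximate usable storage bandwidth per host in MB/s."""
    return link_gbit * GBIT * links * efficiency / 8 / 1_000_000

vms_per_host = 40

for name, link_gbit, links in [("dual 10GbE iSCSI", 10, 2),
                               ("dual 8Gbit FC", 8, 2)]:
    total = usable_mb_per_s(link_gbit, links)
    per_vm = total / vms_per_host
    print(f"{name}: ~{total:.0f} MB/s per host, "
          f"~{per_vm:.0f} MB/s per VM if all {vms_per_host} VMs are busy")

In this sketch both options end up with similar per-VM headroom for dev/test
VMs, so the decision usually comes down more to cost and what your network
team can support.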

NetApp is fully supported with Hyper-V; it really depends on whether you need
the full NetApp feature set, since you pay for each licensed feature.
I personally have also had very good experience with iSCSI boxes like
EqualLogic (Dell) or HP LeftHand. With falling prices in the 10Gbit segment,
iSCSI is becoming a realistic option even for high-workload scenarios like
yours.

Hope that helps a bit

Regards
Ramazan

"iBeech" <[email protected]> wrote in message
news:[email protected]

> Hey all,
>
> I'm wondering if anyone can recommend a SAN configuration for my
> requirements:
>
> -25 physical hosts, capable of hosting 40 virtual machines each (1000 VMs)
> -VMs will be deployed via SCVMM
> -Usage is for development / testing. Logging in, installing applications,
> some database, and scrapping said VM (standard dev work)
> -None of the VMs are mission critical, so where good performance is
> required
> for day to day use, performance is not critical.
>
> We are after 25TB storage, expandable to over 50TB for the future.
>
> I have been recommended the NetApp FAS2040. But i'm unsure if it is up to
> the task.
>
> Does anyone have any experiance / recommendations?
>
> Regards,
> Tom
 


iBeech

Thanks for your reply!

We're lucky in that we don't currently have an extensive SAN infrastructure.

We would prefer to go with NetApp, for several reasons.


In our current quote we've got 8Gb optical switches but only 4Gb FC adapters,
so I guess the first thing is to upgrade those to 8Gb.

As I previously stated, we don't need all the fancy software features. The
solution is exclusively to host VMs.

I ran some IOPS averages on some of my VMs: they peak at around 150 IOPS but
average about 2-3 IOPS over a day.
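
To give a feel for the aggregate load, here's the quick back-of-the-envelope
estimate I've been working from. The 10% concurrency figure is just my guess,
not something I've measured:

# Rough aggregate IOPS estimate across all 1000 VMs.
# Only the per-VM average and peak come from my measurements; the assumption
# that about 10% of VMs hit their peak at the same time is a guess.

total_vms = 1000
avg_iops_per_vm = 3               # measured daily average (2-3 IOPS)
peak_iops_per_vm = 150            # measured peak
concurrent_peak_fraction = 0.10   # assumed

steady_state = total_vms * avg_iops_per_vm
burst = total_vms * concurrent_peak_fraction * peak_iops_per_vm

print(f"steady state:             ~{steady_state} IOPS")
print(f"assumed worst-case burst: ~{burst:.0f} IOPS")

That puts the steady state around 3,000 IOPS, with bursts somewhere in the
15,000 IOPS range depending on how pessimistic the concurrency guess is.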

How many disks would you recommend?

Thanks.

"RCan" wrote:

> Hi iBeech,
>
> you will hear here different answers as every SAN admin has their own
> preferences there. Also it depends on several other unknown factors to
> decide a SAN hardware vendor plattform.
> Generally you should carefull with the sizing of your storage subsystem as
> this is mostly one of the critical areas where performance can be degraded
> for all running VMs. golden rule here is = as much as possible spindles :-)
> Before you get an correct answer to your question you need to know the
> workload (I/O) requirements of all your 1000VMs and also the right
> distrubution. 40 VM per host is generally no problem for Hyper-V when you
> take care of the performance for SAN connection, I would prefer 10GBit if
> ISCSI and min. multiple 8GBit SAN FC adapters. The general decision between
> Fibre Channel and ISCSI SAN environments should also be worked out. Do you
> have a SAN network already in place ? if yes, what are your SAN switches
> capable, 2-4 or 8GBit ? If not, ISCSI with 10GBit possible ?
>
> NetApp is fully supported by Hyper-V and it really depends if you would
> require all the featureset from Netapp as you also pay for each license.
> I personally have have also very good experience with ISCSI boxes like
> Equalogic (Dell) or HP Lefthand .... With the falling price barometer in
> 10GBit segment ISCSI is becoming really in scope also for high workload
> scenarios like yours.
>
> Hope that helps a bit
>
> Regards
> Ramazan
>
> "iBeech" <[email protected]> wrote in message
> news:[email protected]

> > Hey all,
> >
> > I'm wondering if anyone can recommend a SAN configuration for my
> > requirements:
> >
> > -25 physical hosts, capable of hosting 40 virtual machines each (1000 VMs)
> > -VMs will be deployed via SCVMM
> > -Usage is for development / testing. Logging in, installing applications,
> > some database, and scrapping said VM (standard dev work)
> > -None of the VMs are mission critical, so where good performance is
> > required
> > for day to day use, performance is not critical.
> >
> > We are after 25TB storage, expandable to over 50TB for the future.
> >
> > I have been recommended the NetApp FAS2040. But i'm unsure if it is up to
> > the task.
> >
> > Does anyone have any experiance / recommendations?
> >
> > Regards,
> > Tom
>
 


RCan

Hi iBeech,

Yes, if possible; since you are investing in a SAN now, I personally would
definitely prefer 8Gbit :-)

That question can certainly be answered better by NetApp than by anything you
will hear here.
But as a general rule of thumb, a current FC/iSCSI SAN disk can provide a
maximum of roughly 100-180 IOPS per spindle.
Of course this also depends on several other factors (rotational speed,
RAID level, ...), but for a JBOD environment it can be used for the
calculation.
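
As a very rough illustration of that rule of thumb, something like the sketch
below shows how the spindle count falls out of the numbers. The frontend IOPS,
read/write mix, RAID penalty and disk size are assumed values for illustration
only; NetApp's own sizing tools (which can also factor in cache effects) will
give you the real answer:

import math

# Rough spindle-count sketch based on the ~100-180 IOPS/spindle rule of thumb.
# All inputs are assumptions for illustration, not measured values.
frontend_iops = 15000        # assumed aggregate burst load
read_fraction = 0.6          # assumed 60/40 read/write mix
raid_write_penalty = 2       # RAID-10 (RAID-5 would be 4, RAID-6 would be 6)
iops_per_spindle = 150       # middle of the 100-180 range

backend_iops = (frontend_iops * read_fraction
                + frontend_iops * (1 - read_fraction) * raid_write_penalty)
spindles_for_iops = math.ceil(backend_iops / iops_per_spindle)

# Capacity check against the 25 TB requirement (RAID-10 halves usable space).
disk_size_tb = 0.6           # assumed 600 GB drives
usable_fraction = 0.5        # RAID-10
spindles_for_capacity = math.ceil(25 / (disk_size_tb * usable_fraction))

print(f"backend IOPS needed:            ~{backend_iops:.0f}")
print(f"spindles for IOPS:               {spindles_for_iops}")
print(f"spindles for capacity:           {spindles_for_capacity}")
print(f"size for the larger of the two:  {max(spindles_for_iops, spindles_for_capacity)}")

Whichever of the two numbers is larger drives the shelf count; treat it purely
as a sanity check before the vendor sizing.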

Provide all the known details about IOPS per VM and NetApp can run a correct
sizing.

PS: I don't want to argue against NetApp here; I just want to understand your
reasons for going with them, as they are usually much more expensive than the
alternatives.

Regards
Ramazan

"iBeech" <[email protected]> wrote in message
news:[email protected]

> Thanks for your reply!
>
> we're lucky, as we dont currently have an extensive SAN infrastructure.
>
> We would prefer to go with NetApp, for several reasons.
>
>
> In our current quote, we've got 8Gb optical switches, but only 4Gb FCs. So
> i
> guess the first thing is to upgrade them to 8Gb.
>
> As i previously stated, we dont need all the fancy software features. The
> solution is to exclusivley host VMs.
>
> I ran some IOPS averages on some of my VMs, they peak at around 150 IOPS.
> But average about 2-3 IOPS over a day.
>
> How many disks would you recommend?
>
> Thanks.
>
> "RCan" wrote:
>

>> Hi iBeech,
>>
>> you will hear here different answers as every SAN admin has their own
>> preferences there. Also it depends on several other unknown factors to
>> decide a SAN hardware vendor plattform.
>> Generally you should carefull with the sizing of your storage subsystem
>> as
>> this is mostly one of the critical areas where performance can be
>> degraded
>> for all running VMs. golden rule here is = as much as possible spindles
>> :-)
>> Before you get an correct answer to your question you need to know the
>> workload (I/O) requirements of all your 1000VMs and also the right
>> distrubution. 40 VM per host is generally no problem for Hyper-V when you
>> take care of the performance for SAN connection, I would prefer 10GBit if
>> ISCSI and min. multiple 8GBit SAN FC adapters. The general decision
>> between
>> Fibre Channel and ISCSI SAN environments should also be worked out. Do
>> you
>> have a SAN network already in place ? if yes, what are your SAN switches
>> capable, 2-4 or 8GBit ? If not, ISCSI with 10GBit possible ?
>>
>> NetApp is fully supported by Hyper-V and it really depends if you would
>> require all the featureset from Netapp as you also pay for each license.
>> I personally have have also very good experience with ISCSI boxes like
>> Equalogic (Dell) or HP Lefthand .... With the falling price barometer in
>> 10GBit segment ISCSI is becoming really in scope also for high workload
>> scenarios like yours.
>>
>> Hope that helps a bit
>>
>> Regards
>> Ramazan
>>
>> "iBeech" <[email protected]> wrote in message
>> news:[email protected]

>> > Hey all,
>> >
>> > I'm wondering if anyone can recommend a SAN configuration for my
>> > requirements:
>> >
>> > -25 physical hosts, capable of hosting 40 virtual machines each (1000
>> > VMs)
>> > -VMs will be deployed via SCVMM
>> > -Usage is for development / testing. Logging in, installing
>> > applications,
>> > some database, and scrapping said VM (standard dev work)
>> > -None of the VMs are mission critical, so where good performance is
>> > required
>> > for day to day use, performance is not critical.
>> >
>> > We are after 25TB storage, expandable to over 50TB for the future.
>> >
>> > I have been recommended the NetApp FAS2040. But i'm unsure if it is up
>> > to
>> > the task.
>> >
>> > Does anyone have any experiance / recommendations?
>> >
>> > Regards,
>> > Tom
>>
 
