08-01-2011 09:55 PM
We have an existing Dell/EMC CX3-10 SAN environment that utilizes a Brocade Fibre switch; it has been in place and working fine for a couple of years now. We have decided to move many of our app, web, and management servers to virtualization on VMware ESXi 4.1. I need to find information on best practices for setting up the storage environment for VMware, zoning, setting up the LUNs, etc., so that I can set this up right the first time. I do have the necessary available ports in the Fibre switch, along with the necessary licensing, as well as more than 6 TB of disk space in the SAN for this project. We purchased 3 Dell R710 servers with dual QLogic 2460 4 Gb Fibre Channel HBAs in each.
If anyone has any suggestions, thoughts, posts, blogs, white papers, or whatever, that would be great. I do have some experience working within the storage environment, but I'm no guru, so I'd be grateful for anything.
08-02-2011 12:09 AM
This is not easy to answer because it depends on your application I/O profile.
First, you need to ask yourself whether you want to use Raw Device Mappings (RDMs) or not.
Both approaches have advantages and disadvantages, and the choice is driven by your team structure and the size of the environment.
What I'm trying to say is: ask yourself how Raw Device Mappings will affect your work, whether you have to handle many different servers or a single ESX host.
VMware provides some guidelines on how to set up the infrastructure.
Some of the recommendations are not clearly defined and can cause issues if you work with big datastores and big LUNs.
I have attached a guide from VMware.
I hope this helps.
08-03-2011 06:46 AM
Andreas, thanks for the reply and the PDF; everything helps. At this time I had not planned on using RDMs, just VMFS datastores on our SAN storage. However, it's a good thing to review just in case I end up needing it for this project.
08-03-2011 07:14 AM
If you use datastores, keep an eye on SCSI reservations, which can cause performance issues if your LUNs are too big.
A bigger LUN can host more guests, and those guests affect each other, especially if you use ESX snapshots.
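Here is a rough back-of-the-envelope sketch of why that happens; the average VM size and free-space percentage are only assumptions picked for illustration, not recommendations, but they show how quickly a bigger LUN collects more guests that then share the same reservation activity:

```python
# Rough sizing sketch: more guests per LUN means more VMs competing for
# SCSI reservations on the same device (VMFS metadata updates, snapshots, etc.).
# The VM size and free-space figures below are illustrative assumptions only.

def guests_per_lun(lun_size_gb, avg_vm_gb=60, free_space_pct=0.20):
    """Estimate how many guests fit on a datastore while keeping some headroom."""
    usable_gb = lun_size_gb * (1 - free_space_pct)
    return int(usable_gb // avg_vm_gb)

for lun_gb in (500, 1000, 2000):
    print(f"{lun_gb} GB LUN -> roughly {guests_per_lun(lun_gb)} guests sharing reservations")
```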
I would appreciate it if you would mark the thread as answered or helpful.
08-03-2011 09:12 AM
I agree with Andreas that you have to be aware of your SCSI reservations, and I will also toss in that even though 2 TB LUNs sound great for a datastore, they are not always the best way to go. On our CX3-240 we found it better to create 1 TB datastores and spread out the I/O.
Hope this helps!
08-03-2011 11:19 AM
Thanks for the feedback. I had actually considered the one-large-LUN idea, but after more research I'm not going to go that route. I like the idea of 1 TB LUN segments; it gives me a lot of flexibility.
How about this: create 3 small LUNs for the VM hosts to boot from, then create six 1 TB LUNs and assign 2 of those to each VM host. That should help if I run into SCSI reservation issues, and if not, I've got scalability too.
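Just to sanity-check that layout against the space I have, here's a quick sketch; the 20 GB boot LUN size is only a placeholder guess on my part, not a sizing recommendation:

```python
# Sketch of the proposed LUN layout: 3 boot LUNs (one per R710 host)
# plus six 1 TB VMFS datastores, two assigned to each host.
# The 20 GB boot LUN size is a placeholder assumption.
boot_luns = [(f"esx-boot-{i}", 20) for i in range(1, 4)]
vmfs_luns = [(f"vmfs-datastore-{i}", 1024) for i in range(1, 7)]

total_gb = sum(size for _, size in boot_luns + vmfs_luns)
print(f"Total provisioned: {total_gb / 1024:.2f} TB")   # just over 6 TB
print(f"1 TB datastores per host: {len(vmfs_luns) // 3}")
```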
08-03-2011 11:27 AM
I think that sounds very reasonable. I would still watch my SCSI reservations just to be sure you are not having any issues. We're using our EMC for our test environment now, so breaking up the LUNs into 1 TB really helped. You most likely know this already, but if you're going to access the datastores across all your VMs, make sure your LUN numbers are the same. It's common knowledge but sometimes missed, and it can cause issues down the road for sure. =)
We currently have a Compellent SAN and have followed the same methodology with 1 TB LUNs, and it seems to work great for us. Good luck!
08-03-2011 12:14 PM
We are planning on starting with our test environment and adding several production servers once we know all is working as expected. It's actually been some time since I've had to work within our SAN; I had forgotten about LUN numbers needing to be the same, so thanks for the reminder. That actually brings up a new question, though. When you say access the datastores across all the VMs, are you talking about pooling the storage in a cluster-type fashion, or just simply making them visible? I want to make sure I can utilize vMotion, failover, or another type of DR solution too, which I imagine I may need to pool the storage for. Correct?
08-03-2011 01:29 PM
"When you say access the datastores across all the VMs, are you talking about pooling the storage in a cluster-type fashion, or just simply making them visible?"
Another heads-up will be changing the Host ID number (which is the LUN number presented to the host). You can change that by right-clicking the server under Storage Groups and selecting Properties, then clicking the LUN tab; you will see the LUN ID and the Host ID. Click on the Host ID number and it will give you a pull-down option to change it.
For example, if I have LUN ID 10 and I want the server to see it as LUN 10, I change the Host ID to 10. Now when I access the storage via vSphere, I should see LUN 10. You will want to make sure that all hosts that map to that datastore see it as LUN 10; otherwise you will have problems.
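To make the "same LUN number on every host" rule concrete, here's a small sketch of the kind of cross-check you'd do by hand in Navisphere or vSphere; the host names, datastore names, and Host IDs are made-up examples, not pulled from a real array:

```python
# Hypothetical inventory of how each ESX host sees the shared datastore LUNs.
# Host names, datastore names, and Host IDs below are made-up examples.
host_id_map = {
    "esx01": {"vmfs-datastore-1": 10, "vmfs-datastore-2": 11},
    "esx02": {"vmfs-datastore-1": 10, "vmfs-datastore-2": 11},
    "esx03": {"vmfs-datastore-1": 10, "vmfs-datastore-2": 12},  # mismatch!
}

# Flag any datastore that is presented with different Host IDs across hosts.
datastores = {ds for luns in host_id_map.values() for ds in luns}
for ds in sorted(datastores):
    ids = {host: luns.get(ds) for host, luns in host_id_map.items()}
    if len(set(ids.values())) > 1:
        print(f"WARNING: {ds} is presented with inconsistent Host IDs: {ids}")
```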
Hope that helps a little bit....
08-03-2011 02:10 PM
Good deal, some very helpful information, thanks!
I did have one other question, which I may need to post on the EMC forum since it's more related there, but I'll give it a shot.
We have added a new DAE enclosure with 15 600 GB 15K FC drives. That storage is supposed to be divided up between two projects: virtualization and expanding the drive space of other servers currently connected.
Now that I have an idea of how to break up the LUNs, I want to make sure I minimize my loss of physical drive space to RAID overhead, parity in this case, because we'd like to use RAID 5. With the idea of creating six 1 TB LUNs and assigning 2 to each VMware host, I'm not sure I can create six 1 TB LUNs on one RAID 5 group and still only lose one disk to parity, or can I? If I can, am I looking for trouble by doing that? Does that negate my attempt to reduce SCSI reservations, or are there other issues I haven't read about yet?
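Here's my rough math on that, assuming roughly 536 GB usable per formatted 600 GB drive (my estimate, not a measured number) and no hot spare taken from this shelf; it just compares one big RAID 5 group against splitting the shelf into smaller groups:

```python
# Back-of-the-envelope RAID 5 capacity math for the new 15-drive DAE.
# Assumptions: a 600 GB FC drive formats to roughly 536 GB usable, and
# no hot spare is allocated from this shelf.
DRIVE_GB = 536

def raid5_usable_gb(group_sizes):
    """Each RAID 5 group loses the equivalent of one drive to parity."""
    return sum((n - 1) * DRIVE_GB for n in group_sizes)

layouts = {
    "one 14+1 group (one drive of parity)": [15],
    "three 4+1 groups (three drives of parity)": [5, 5, 5],
}
for name, groups in layouts.items():
    print(f"{name}: ~{raid5_usable_gb(groups) / 1024:.1f} TB usable")
```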