
SAN Question

I would have to say no, it's not really what it's for. We could help you better with a more detailed description of your problem.
 
Actually, it could be, depending on how the SAN works - if you have a disk group in a virtualized one with sufficient disks, then it should not be an issue, as the IO is distributed over the multiple disks. If your SAN requires you to physically partition the physical disks for different RAID groups, then this may be an issue. It also comes down to what type of SAN it is in terms of interface speed and cache, as well as the amount/type of VMs you're running.
 
it would be an iSCSI SAN from HP Lefthand, and a server would be using its disks to host virtual machines. i'm thinking a 2 TB one with max 10 VMs all running different things like database, file server, terminal server, etc. let me know if you need any more details, thanks.
 
I have NOT tried this. But I have the bad feeling that it won't be fast enough with a 1 Gbps connection. Maybe with a 10 Gbps connection.

The problem isn't how many disks or how fast they are. It's that network connection.
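To put rough numbers on why the link is the bottleneck, here's a quick back-of-envelope sketch. The link efficiency and per-drive throughput figures are illustrative assumptions, not measurements from anyone's setup:

```python
# Back-of-envelope: is the iSCSI link or the disk array the bottleneck?
# All figures are rough assumptions for illustration only.

def link_throughput_mb_s(gbps, efficiency=0.9):
    """Usable MB/s on an Ethernet link, assuming ~10% protocol overhead."""
    return gbps * 1000 / 8 * efficiency

# Assume ~8 SAS drives streaming ~100 MB/s each (illustrative only).
disk_throughput = 8 * 100  # aggregate MB/s the array could push

for gbps in (1, 10):
    link = link_throughput_mb_s(gbps)
    bottleneck = "network" if link < disk_throughput else "disks"
    print(f"{gbps} Gbps link: ~{link:.0f} MB/s usable; bottleneck: {bottleneck}")
```

With these assumed numbers, a single 1 Gbps link caps out around 112 MB/s - well below what even a modest SAS array can deliver - while a 10 Gbps link flips the bottleneck back to the disks.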
 
more details please.

hyper-v or vmware? HA/FT/DRS/Vmotion necessary?

dual gigabit should be fine, but the modern lefthand (hp servers) support the $500 dual 10GbE NIC, which can use a copper (15m) $150 cable with no GBICs. I'm not sure if you can run that switchless though.

if you don't need the fancy high availability or vmotion/svmotion, you may consider just doing DAS with a 16-bay dl380/dl385 G6.

if you are talking sql server, you do have the option of running it in a vm and using the microsoft iscsi initiator in the vm with mpio (dual gig-e) for db/logs to the lefthand to give it a boost, at the cost of becoming much less flexible.

raid-10 all the way.
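A quick sketch of why RAID-10 tends to win for random-write-heavy VM workloads - usable capacity vs. write IOPS. The drive count, size, and per-drive IOPS below are illustrative assumptions, not figures from this thread:

```python
# Rough RAID-10 vs RAID-5 comparison for a VM workload.
# Drive count/size and per-drive IOPS are illustrative assumptions.

def raid10(n_drives, size_gb, iops):
    usable = n_drives // 2 * size_gb     # mirroring halves raw capacity
    write_iops = n_drives * iops // 2    # 2 physical writes per logical write
    return usable, write_iops

def raid5(n_drives, size_gb, iops):
    usable = (n_drives - 1) * size_gb    # one drive's worth lost to parity
    write_iops = n_drives * iops // 4    # ~4 I/Os per logical write (read-modify-write)
    return usable, write_iops

# e.g. 16 x 300 GB SAS drives at ~150 IOPS each
print("RAID-10:", raid10(16, 300, 150))   # → (2400, 1200)
print("RAID-5: ", raid5(16, 300, 150))    # → (4500, 600)
```

RAID-5 gives you more usable space, but with these assumptions RAID-10 delivers roughly double the random-write IOPS - which is usually what 10 mixed VMs actually hammer.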

Tell us more about your budget for both hardware and software. I'd suspect you could easily spend more on software licensing (to do it proper) than hardware.

this particular machine will be Hyper-V.

what is FT/DRS?

yeah, i don't have 10 Gb NICs, and i'm not sure if the copper solution will work here.

as for the budget, i don't really know yet since we are still discussing options, but it's probably between $15K and $25K.
 
yes, it would be one physical server with 2 quad-core processors, probably at least 2.5 GHz, and 1 hp lefthand unit (probably SAS drives since SATA is slower).
 
Any reason why you are going with a SAN even for just 1 server? Especially for 2TB only, seems like a DAS solution might be just as good for half the cost.
 
i'd just go for the dl385 G6 or dl380 G6 with a 16-drive config, 48GB RAM, DAS, 16 SAS drives and call it a day personally.

lefthand is too expensive a solution.

heck, a SAS-based MSA2000sa G2 makes more sense than a lefthand unless you have managed to find one somewhere for stupid cheap.

 
good questions.. fair enough. what capabilities are not included with the DAS? can they replicate over different buildings/subnets?

Emulex - how much would you say these run, approx? the dl385 G6 or dl380 G6 with a 16-drive config, or the MSA2000sa G2?
 
a lot less than a $30-60K lefthand setup plus proper switching (redundant gigabit-quality setup with RPS) plus base servers. Keep in mind the reason you don't see lefthands on the websites is that they are all CTO: base + software + support (hence the ZERO stock).

Whatever you do - skip SATA for this task. It is mind-numbingly slow at handling VMs.

if you are going to replicate over a lan, you could probably get away with 1 sata storage and 1 mainline storage, but that never works out so hot if you actually lose your primary location and have to end up running on the backup rig.

not sure on pricing - you can probably go to any website and build out the machine you want. keep in mind the lefthand is just a standard 16- or 24-port hp server nowadays. they used to run dl320s, then moved to more modern ones such as the 16-port dl380 g6.

the msa2012sa or msa2324sa is pretty cool for a small # of machines as clustered storage. SAS cheapness; better than FC connectivity (the sas port is x4 lane x 3G). iirc these won't be vmware certified for a few more months; only the G1 is. That might present you with a problem if you encounter any issues. Microsoft - no problemo. Vmware - works. But if you plan to use a vmware solution, there is no way in heck i'd run anything not on the HCL.
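Rough math on the "x4 lane x 3G" point - a sketch assuming 8b/10b line encoding (10 bits per byte on the wire) on both the SAS wide port and older-generation FC:

```python
# Why a SAS x4 wide port beats typical 4Gb FC on raw bandwidth.
# Assumes 8b/10b encoding on both, i.e. 10 line bits per data byte.

def sas_wide_port_mb_s(lanes=4, gbps_per_lane=3):
    """Aggregate MB/s of a SAS wide port: lanes x line rate / 10."""
    return lanes * gbps_per_lane * 1000 / 10

def fc_mb_s(gbps):
    """MB/s of an FC link at a given nominal line rate."""
    return gbps * 1000 / 10

print(sas_wide_port_mb_s())  # x4 wide port at 3G per lane
print(fc_mb_s(4))            # 4Gb FC link
```

Under these assumptions the x4 3G SAS port works out to roughly 1200 MB/s versus about 400 MB/s for a single 4Gb FC link - which is the "better than FC" argument in a nutshell.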

I think you should look at some other options - perhaps software-based (Starwind/DoubleTake) for replication if you are budgeting below $100K for a site-to-site full DR [2 of everything] using quality hardware components.

You should probably put together a realistic budget with your necessary goals and have a sales person go over a total solution. That is what they get paid to do. Usually they'd get an SE on the line to iron out the challenges.

I'm sure there are open-source or near-free ways to accomplish this as well (software-wise) - but enterprise doesn't run like that. they need to be able to hire another mcse/vcp/whatever to replace you if you get hit by a bus and keep things running.
 