So, the calculator is now out in the wild to help you size your Exchange 2013 servers to cope with messaging traffic for your new implementation. But if you’ve been reading up on the new sizing, then you’ll know things have changed a fair bit. In short, it’s more CPU-intensive and needs more RAM, but has lower IO requirements.
The first point of note is simple – if you’re looking to do a direct comparison between a deployment on Exchange 2010 and Exchange 2013, then you might find yourself slightly disappointed, and you won’t necessarily see the benefits you might expect.
A case in point is smaller deployments. Using the calculator, I’ve looked at a (reasonably common) Exchange 2010 virtualized deployment and directly compared it to the requirements for Exchange 2013. A really small virtual deployment for a few hundred users might take 8GB RAM and a couple of virtual CPUs, and IOPS would not be an issue because the supporting SAN has more IO headroom than is needed.
Deployments like this – the kind where you have 2-3 Hypervisor hosts sitting on a NetApp – will not see major hardware benefits with Exchange 2013, because nothing in them taxes the hardware to its limits in the first place. The only major sizing change between Exchange 2010 and 2013 is memory, which jumped to 24GB.
So, the rule of thumb for a small deployment on a virtual infrastructure is simple – make sure the Hypervisor host has more memory. Where you would have specified 64GB RAM per Hypervisor host, you’ll probably now need 96GB. It won’t cost a lot more, and you’ll get the Exchange 2013 features. Just remember, you’ll probably want an Office Web Apps server too.
However, organizations considering such a small deployment might want to weigh up the cost of moving to Exchange Online instead (starting at £2.60 per user per month!).
As you scale up, you’ll really start to see the benefits of Exchange 2013. In Exchange 2010, the option of JBOD was there (and still is); however, re-seeding databases after failed disks certainly put many off. A good, safe option was therefore to use RAID 10 or RAID 1 sets for databases and logs, with up to around 2TB MDL SAS/SATA disks in direct-attached storage.
This is where it starts to get interesting with Exchange 2013. Pretty much any organization considering a DAG should now really use JBOD, thanks to Auto Reseed – a feature that replicates the easy automation so many love about hardware RAID (i.e. online spares and automatic rebuild of disks after a failure).
Because JBOD should probably be your starting point, you get to save money on disks. In Exchange 2010 you could have 2TB disks with one database/log set per disk. With Exchange 2013 JBOD you can now have up to 4TB disks, with multiple database/log sets per disk. That gives us a great opportunity to use far fewer disks.
Because you’re using fewer disks, in many cases you can design around building-block server storage. No expensive DAS arrays hanging off the back – just something like HP ProLiant DL380 G8s with twelve 4TB disks per server: two as system volumes and the remaining ten for JBOD.
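As a quick sanity check on what one of those building blocks gives you (server model and disk counts as in the example above; the split between system and data disks is the one described):

```python
# Raw JBOD capacity per building-block server
# (twelve 4TB disks, two reserved as system volumes -- figures from the post).
DISKS_PER_SERVER = 12
SYSTEM_DISKS = 2       # pair reserved for OS/system volumes
DISK_TB = 4

data_disks = DISKS_PER_SERVER - SYSTEM_DISKS
raw_jbod_tb = data_disks * DISK_TB

print(f"{data_disks} JBOD data disks, {raw_jbod_tb} TB raw per server")
# 10 JBOD data disks, 40 TB raw per server
```

Scale out by adding identical servers rather than growing an external array – that’s the building-block idea.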
My comparison for you here is a slightly larger environment, 2000 mailboxes running on physical hardware.
On Exchange 2010, the theoretical design went for multi-role servers using RAID 10 with a single-site DAG with two nodes (each with 6 cores and 16GB RAM) and 2 database copies. This results in 110 disks across the two nodes. We’d back up Exchange using VSS to more storage somewhere else.
On Exchange 2013, we re-think the design slightly and go for JBOD, again with a single-site DAG, this time with 4 nodes (same 6 cores and 16GB RAM per node) and 3 database copies. We dispense with traditional backups. This results in 36 disks.
Yes, you read that right: 36 disks in Exchange 2013, versus 110 disks in Exchange 2010. As well as using 74 fewer disks, we also get an additional database copy and the ability to dispense with expensive, traditional backups.
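The capacity side of that saving can be sketched with some back-of-envelope arithmetic. This is purely illustrative – the 20TB dataset size is an assumed figure, not the calculator’s output, and the real calculator also models IOPS, overheads and quotas (which is why the actual gap, 110 versus 36, is even wider than capacity alone suggests):

```python
import math

def disks_needed(data_tb, copies, raid_factor, disk_tb):
    """Disks needed to hold `copies` copies of `data_tb` TB of database+log data.

    raid_factor is the raw-to-usable multiplier:
    2 for RAID 10 (every disk is mirrored), 1 for JBOD (no RAID).
    """
    return math.ceil(data_tb * copies * raid_factor / disk_tb)

# Assumed 20 TB dataset for the 2000 mailboxes (illustrative figure).
ex2010 = disks_needed(20, copies=2, raid_factor=2, disk_tb=2)  # RAID 10, 2TB disks
ex2013 = disks_needed(20, copies=3, raid_factor=1, disk_tb=4)  # JBOD, 4TB disks

print(ex2010, ex2013)  # capacity alone: 40 vs 15 disks
```

Even with an extra database copy, dropping the RAID 10 mirroring penalty and doubling the disk size cuts the capacity-driven disk count by well over half.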
As you can see, that’s a great result:
Exchange 2013 for small deployments in virtual environments is fine. You’ll need a bit more RAM, but it’s still viable – though Exchange Online certainly becomes slightly more attractive. Exchange 2013 for larger deployments versus Exchange 2010? It’s a no-brainer!
Download the Exchange 2013 Server Role Requirements Calculator from here. Just remember – think different!
8 thoughts on “Exchange 2013 Server Role Requirements Calculator – What does it mean for Exchange implementations?”
Pingback: The UC Architects » Episode 22: A Game of Clouds
I noticed one small thing when using the calculator with a larger audience (10,000+): when I want to use JBOD with multiple copies per volume (3+), it doesn’t let me use the cheapest disks (SAS/SATA 7,200 RPM). For the error to go away I have to change it to SAS 10,000 RPM drives.
So I guess there’s a limit (and the calculator knows it) to the IOPS of these “cheap disks”.
Yes, there is a limit to the IOPS of cheap disks. The IOPS limits themselves haven’t changed, but the IOPS required by Exchange 2013 have decreased. As per my example above, you may need to consider more cheap “building blocks” than with Exchange 2010, but overall you should find you need fewer disks than Exchange 2010, whether for traditional RAID or JBOD.
My example above used SAS/SATA 7,200 RPM disks.
This is just fantastic! Thanks for sharing. How did you carve up the disks? Is it one big RAID 5 set?
Exchange 2010 = RAID 1 or RAID 10. No RAID 5 for the MDL SAS/SATA disks.
Exchange 2013 as mentioned above has Auto-Reseed, making JBOD (No RAID) viable as the disk replacement procedures are similar to hardware RAID.
I just looked up auto-reseed. I had no idea it was this cool! Thanks Steve!
So is this calculator version useless for Exchange 2010?
The Exchange 2010 Mailbox Server Role Requirements Calculator is for Exchange 2010 only; the Exchange 2013 Server Role Requirements calculator is for Exchange 2013 only. I made the comparisons using the respective versions.