As part of the Microsoft IaaS Foundations series, this document provides an overview of Microsoft Infrastructure as a Service design patterns that you can use to drive your own design exercises. These patterns are field-tested and represent the results of Microsoft Consulting Services' experience in the field.
Table of Contents
1 Introduction
1.1 Scope
1.2 IaaS Reference Architectures
1.2.1 Microsoft IaaS Architecture Fabric Design Patterns
1.3 Windows Hardware Certification
1.3.1 Windows Hardware Certification
1.3.2 Failover-Clustering Validation
1.3.3 Clustered RAID Controller Validation
1.4 Windows Licensing
2 Software Defined Infrastructure Pattern Overview
3 Non-Converged Infrastructure Pattern Overview
4 Converged Infrastructure Pattern Overview
5 Hybrid Infrastructure Pattern Overview
This document is part of a collection, still in development, that will constitute the Microsoft Infrastructure as a Service Foundations series. These documents provide information about design elements and recommended options for a Microsoft Infrastructure as a Service infrastructure. The infrastructure could support on-premises IaaS, public cloud provider (Azure) IaaS, hosting service provider IaaS, or any combination of these as part of a hybrid IaaS solution. These documents are intended for IaaS cloud services architects and system designers, but they will be of interest to anyone seeking detailed information about what Microsoft has to offer in the Infrastructure as a Service space.
Contributors:
Adam Fazio – Microsoft
David Ziembicki – Microsoft
Joel Yoker – Microsoft
Artem Pronichkin – Microsoft
Jeff Baker – Microsoft
Michael Lubanski – Microsoft
Robert Larson – Microsoft
Steve Chadly – Microsoft
Alex Lee – Microsoft
Yuri Diogenes – Microsoft
Carlos Mayol Berral – Microsoft
Ricardo Machado – Microsoft
Sacha Narinx – Microsoft
Tom Shinder – Microsoft
Jim Dial – Microsoft
1 Introduction
The goal of the Infrastructure-as-a-Service (IaaS) Foundations series is to help enterprise IT and cloud service providers understand, develop, and implement IaaS infrastructures. The series provides comprehensive conceptual background, a reference architecture, and a reference implementation that combine Microsoft software, consolidated guidance, and validated configurations with partner technologies such as compute, network, and storage architectures, in addition to value-added software features.
The IaaS Foundations Series utilizes the core capabilities of the Windows Server operating system, Hyper-V, System Center, Windows Azure Pack and Microsoft Azure to deliver on-premises and hybrid cloud Infrastructure as a Service offerings.
1.1 Scope
The scope of this document is to provide customers with the necessary guidance to develop solutions for a Microsoft private cloud infrastructure in accordance with the IaaS patterns that are identified for use with the Windows Server 2012 R2 operating system. This document provides specific guidance for developing fabric architectures (compute, network, storage, and virtualization layers) for an overall private cloud solution. Guidance is provided for the development of an accompanying fabric management architecture that uses System Center 2012 R2.
1.2 IaaS Reference Architectures
Microsoft Private Cloud programs have two main solutions, as shown in Figure 1. The Microsoft Infrastructure as a Service Foundations series focuses on the open solutions model, which can be used to serve the enterprise and hosting service provider audiences.
Figure 1 Branches of the Microsoft Private Cloud
Small- and medium-sized enterprises should plan against a reference architecture that defines the requirements necessary to design, build, and deliver virtualization and private cloud solutions, including hosting service provider implementations.
Figure 2 shows examples of these reference architectures.
Figure 2 Examples of reference architectures
Each reference architecture combines concise guidance with validated configurations for the compute, network, storage, and virtualization layers. Each architecture presents multiple design patterns to enable the architecture, and each design pattern describes what we consider to be minimum requirements for each solution.
1.2.1 Microsoft IaaS Architecture Fabric Design Patterns
Windows Server 2012 R2 utilizes innovative hardware capabilities, and it enables what were previously considered advanced scenarios and capabilities from commodity hardware. These capabilities have been summarized into initial design patterns described in the Microsoft Infrastructure as a Service Foundations series. Identified patterns include the following infrastructures:
- Software-defined infrastructure
- Non-converged infrastructure
- Converged infrastructure
Each design pattern guide outlines the high-level architecture, provides an overview of the scenario, identifies technical requirements, outlines all dependencies, and provides guidelines as to how the architectural guidance applies to each deployment pattern. Each pattern also includes an array of fabric constructs in the categories of compute, network, storage, and virtualization. Each pattern is outlined in this guide with an overview of the pattern and a summary of how each pattern leverages each feature area.
The following features are common across each of the design patterns:
Required Features | Optional Features |
Dedicated fabric management hosts | Addition of single root I/O virtualization |
10 gigabit Ethernet (GbE) or higher network connectivity | Addition of a certified Hyper-V extensible virtual switch extension |
Redundant paths for all storage networking components (such as redundant serial attached SCSI (SAS) paths, and Multipath I/O (MPIO) for Fibre Channel and SMB Multichannel where appropriate) | |
SMI-S or SMP–compliant management interfaces for storage components | |
Remote direct memory access (RDMA) network connectivity (RoCE, InfiniBand or iWARP) | |
Shared storage | |
The following table outlines the Windows Server 2012 R2 features and technologies that are common to all patterns:
Windows Server 2012 R2 Feature | Key Scenarios |
Increased Virtual Processor to Logical Processor ratio | Removal of previous limits of 8:1 processor ratios for server workloads and 12:1 processor ratios for client workloads. |
Increased virtual memory and Dynamic Memory | Supports up to 1 TB of memory inside virtual machines. |
Virtual machine guest clustering | Supports virtual machine guest clusters by using a shared virtual hard disk, iSCSI connections, or the Hyper-V Fibre Channel adapter to connect virtual machines to shared storage. |
Hyper-V extensible switch | A virtual Ethernet switch that allows filtering, capturing, and forwarding extensions to be added by non-Microsoft vendors to support additional virtual-switch functionality on the Hyper-V platform. |
Cluster-Aware Updating | Provides the ability to apply updates to running failover clusters through coordinated patching of individual failover-cluster nodes. |
Enhanced live migration | Supports migrating virtual machines without shared storage, and faster live migration by using memory compression or the SMB 3.0 protocol. |
Single-root I/O virtualization (SR-IOV) | Provides the ability to assign a network adapter that supports single-root I/O virtualization (SR-IOV) directly to a virtual machine. |
4K sector disk support | Supports native 4K disk drives on hosts. |
Diskless network boot with iSCSI Target Server | Provides the network-boot capability on commodity hardware by using an iSCSI boot–capable network adapter or a software boot loader. |
Generation 2 virtual machines | Windows Server 2012 R2 introduces Generation 2 virtual machines, which support new functionality on virtual machines such as UEFI firmware, PXE boot, and Secure Boot. |
VHDX virtual hard disk format | Supports VHDX format disks that are up to 64 TB in size, as well as shared virtual hard disks. |
NIC Teaming | Supports switch-independent and switch-dependent load distribution by using physical and virtual network connections. |
Data Center Bridging (DCB) | Provides hardware support for converged fabrics, which allows bandwidth allocation and priority flow control. |
Table 1 Windows Server 2012 R2 features and key scenarios applicable to all patterns
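The bandwidth allocation mentioned in the table above for converged fabrics (as provided by Data Center Bridging) can be illustrated with a small conceptual sketch. This is a model of the idea only, not how DCB is configured in practice; the traffic-class names and share values below are hypothetical.

```python
# Conceptual sketch: dividing a converged link's bandwidth among traffic
# classes by weighted shares, similar in spirit to the bandwidth allocation
# that Data Center Bridging (DCB) provides in hardware. Class names and
# shares are hypothetical; real DCB is configured per NIC and per switch.

def allocate_bandwidth(link_gbps: float, shares: dict) -> dict:
    """Split link capacity proportionally to each class's share."""
    total = sum(shares.values())
    return {name: link_gbps * share / total for name, share in shares.items()}

# Example: a 10 GbE converged link carrying storage, VM, and management traffic.
plan = allocate_bandwidth(10, {"storage": 50, "vm": 30, "management": 20})
```

With a 50/30/20 split, storage traffic is guaranteed half of the 10 GbE link, which mirrors the priority that storage paths receive in the required-features list earlier.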
1.3 Windows Hardware Certification
In each of the following patterns, it is mandatory that each architecture solution pass the following validation requirements:
- Windows hardware certification
- Failover-clustering validation
- Clustered RAID controller validation (if a non-Microsoft clustered RAID controller is used)
These rule sets are described in the following subsections.
1.3.1 Windows Hardware Certification
Hardware solutions must receive validation through the Microsoft “Certified for Windows Server 2012 R2” program before they can be presented in the Windows Server Catalog. The catalog contains all servers, storage, and other hardware devices that are certified for use with Windows Server 2012 R2 and Hyper-V.
The Certified for Windows Server 2012 R2 logo demonstrates that a server system meets the high technical bar set by Microsoft for security, reliability, and manageability, and that any required hardware components support all of the roles, features, and interfaces that Windows Server 2012 R2 supports.
The logo program and the support policy for failover-clustering solutions require that all the individual components that make up a cluster configuration earn the appropriate "Certified for" or "Supported on” Windows Server 2012 R2 designations before they are listed in their device-specific categories in the Windows Server Catalog.
For more information, open the Windows Server Catalog. Under Hardware Testing Status, click Certified for Windows Server 2012 R2. The two primary entry points for starting the logo-certification process are Windows Hardware Certification Kit (HCK) downloads and the Windows Dev Center Hardware and Desktop Dashboard.
Validation requirements include failover-clustering validation and clustered RAID controller validation, as described in the following subsections.
1.3.2 Failover-Clustering Validation
For Windows Server 2012 R2, failover clustering can be validated by using the Cluster Validation Tool to confirm network and shared-storage connectivity between the nodes of the cluster. The tool runs a set of focused tests on the servers that will be used as nodes in a cluster, or that are already members of a given cluster. This failover-cluster validation process tests the underlying hardware and software directly to obtain an accurate assessment of whether the failover cluster can support a given configuration.
Cluster validation is used to identify hardware or configuration issues before the cluster enters production. This helps make sure that a solution is truly dependable.
In addition, cluster validation can be performed as a diagnostic tool on configured failover clusters. Failover clusters must be tested and they must pass the failover-cluster validation to receive customer support from Microsoft Customer Support Services (CSS).
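The validation model described above (focused tests run against every prospective node, and all must pass before the configuration is considered supportable) can be sketched as follows. In practice this is done with the Cluster Validation Wizard or the Test-Cluster PowerShell cmdlet; this Python sketch is only an illustration, and the check functions are hypothetical stand-ins.

```python
# Illustrative model of failover-cluster validation: run a set of focused
# tests against every prospective node and report overall pass/fail.
# The real tool is the Cluster Validation Wizard / Test-Cluster cmdlet;
# these checks are hypothetical stand-ins for its network and storage tests.

def check_network(node: dict) -> bool:
    """Hypothetical check: node has redundant network paths."""
    return node.get("nics", 0) >= 2

def check_shared_storage(node: dict) -> bool:
    """Hypothetical check: node can see the shared storage."""
    return node.get("sees_shared_storage", False)

def validate_cluster(nodes: list, checks: list):
    """Return (passed, failures), where failures lists (check, node) pairs."""
    failures = [(c.__name__, n["name"])
                for c in checks for n in nodes if not c(n)]
    return len(failures) == 0, failures
```

As in the real tool, a single failing check on a single node fails the validation as a whole, which is why the result carries the list of failures for diagnosis.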
1.3.3 Clustered RAID Controller Validation
Clustered RAID controllers are a relatively new type of storage interface card that can be used in shared storage and cluster scenarios. When RAID controllers that span the configured servers provide the shared storage, the clustered RAID controller solution must pass clustered RAID controller validation to be supported.
1.4 Windows Licensing
The IaaS architectures use the Windows Server 2012 R2 Standard or Windows Server 2012 R2 Datacenter editions.
The packaging and licensing for Windows Server 2012 R2 have been updated to simplify purchasing and reduce management requirements. The Windows Server 2012 R2 Standard and Datacenter editions are differentiated only by virtualization rights: two virtual instances for the Standard edition, and an unlimited number of virtual instances for the Datacenter edition.
For more information about Windows Server 2012 R2 licensing, see the Windows Server 2012 R2 Datasheet or Windows Server 2012: How to Buy.
For information about licensing in virtual environments, see Microsoft Volume Licensing Brief: Licensing Microsoft Server Products in Virtual Environments.
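The virtualization-rights difference described above lends itself to simple arithmetic when choosing an edition for a given host. The sketch below assumes two virtual instances per Standard license and unlimited instances per Datacenter license, as stated earlier; the prices used are hypothetical placeholders, and real licensing also depends on processor counts and current Microsoft terms.

```python
import math

# Sketch of edition selection by virtualization rights alone, assuming each
# Windows Server 2012 R2 Standard license covers two virtual instances per
# host and a Datacenter license covers an unlimited number. Prices are
# hypothetical; real licensing also counts physical processors.

def standard_licenses_needed(vm_count: int) -> int:
    """Standard licenses required for vm_count virtual instances on one host."""
    return math.ceil(vm_count / 2)

def cheaper_edition(vm_count: int, std_price: float, dc_price: float) -> str:
    """Pick the lower-cost edition for a single host (ties go to Datacenter)."""
    std_cost = standard_licenses_needed(vm_count) * std_price
    return "Standard" if std_cost < dc_price else "Datacenter"
```

The break-even point is where stacked Standard licenses cost more than one Datacenter license, which is why densely virtualized hosts in these patterns typically run the Datacenter edition.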
2 Software Defined Infrastructure Pattern Overview
The Software Defined Infrastructure pattern (previously referred to as Continuous Availability over Server Message Block (SMB) Storage) supports deployments in Windows Server 2012 R2 that use Hyper-V and Failover Clustering. Continuous availability and transparent failover are delivered over a Scale-Out File Server cluster infrastructure, and SMB shared storage is provided by using a converged hardware configuration and native capabilities in the Windows Server 2012 R2 operating system. This pattern has three variations:
- Variation A: SMB Direct using shared serial attached SCSI (SAS) and Storage Spaces
- Variation B: SMB Direct using storage area network (SAN)
- Variation C: SMB 3.0-enabled storage
Note
SMB Direct is based on SMB 3.0, and it supports the use of network adapters that have remote direct memory access (RDMA) capability.
Variation A uses SMB Direct using shared SAS and Storage Spaces to provide storage capabilities over direct-attached storage technologies. This pattern combines a Scale-Out File Server cluster infrastructure with SMB Direct to provide back-end storage that has similar characteristics to traditional SAN infrastructures and supports Hyper-V and SQL Server workloads.
Figure 3 outlines a conceptual view of Variation A.
Figure 3 Conceptual view of variation A
Variation B uses SMB Direct with SAN-based storage, which provides the advanced storage capabilities found in storage area network (SAN) infrastructures. SAN-based storage solutions typically provide features beyond what the Windows Server 2012 R2 operating system can provide natively by using shared direct-attached “Just a Bunch of Drives” (JBOD) storage technologies. Although this variation is generally more expensive, it trades higher cost for greater capability and manageability.
Variation B is similar to Variation A. It utilizes a Scale-Out File Server cluster infrastructure with SMB Direct; however, the back-end storage infrastructure is a SAN-based storage array. In this variation, innovative storage capabilities that are typically associated with SAN infrastructures can be utilized in conjunction with RDMA and SMB connectivity for Hyper-V workloads.
Figure 4 outlines a conceptual view of Variation B.
Figure 4 Conceptual view of variation B
In Variation C, instead of using Scale-Out File Server clusters and SMB Direct, SMB 3.0-enabled storage devices are used to provide basic storage capabilities, and Hyper-V workloads utilize the SMB shared resources directly. This configuration might not provide advanced storage capabilities, but it provides an affordable storage option for Hyper-V workloads.
Figure 5 outlines a conceptual view of Variation C.
Figure 5 Conceptual view of variation C
Although the following list of requirements is not comprehensive, the Software Defined Infrastructure pattern requires the following features:
- All the common features listed earlier
- Dedicated hosts for a Scale-Out File Server cluster (for Variations A and B)
- Shared SAS JBOD storage array (required for Variation A)
- SMB 3.0-enabled storage array (required for Variation C)
Table 2 outlines Windows Server 2012 R2 features and technologies that are utilized in this architectural design pattern in addition to the common features and capabilities mentioned earlier.
Windows Server 2012 R2 Feature | Key Scenarios |
Hyper-V Quality of Service (QoS) | Assigns a certain amount of bandwidth to a given type of traffic and helps make sure that each type of network traffic receives up to its assigned bandwidth. |
Storage Quality of Service (QoS) | Provides storage performance isolation in a multitenant environment and mechanisms to notify you when the storage I/O performance does not meet defined thresholds. |
Shared virtual hard disks | Shared virtual hard disks (.vhdx files) can be used as shared storage for multiple virtual machines that are configured as a guest failover cluster. This avoids the need to use iSCSI in this scenario. |
Storage Spaces | Enables cost-effective, highly available, scalable, and flexible storage solutions in virtualized or physical deployments. |
Storage tiers | Enables the creation of virtual disks that are comprised of two tiers of storage: a solid-state drive tier for frequently accessed data, and a hard disk drive tier for less frequently accessed data. Storage Spaces transparently moves data between the two tiers based on how frequently data is accessed. |
Hyper-V over SMB 3.0 | Supports use of SMB 3.0 file shares as storage locations for running virtual machines by using low-latency RDMA network connectivity. |
SMB Direct | Provides low-latency SMB 3.0 connectivity when remote direct memory access (RDMA) network adapters are used. |
Data Deduplication | Involves finding and removing duplication within data without compromising its fidelity or integrity. |
SMB Multichannel | Allows file servers to use multiple network connections simultaneously, which provides increased throughput and network fault tolerance. |
Virtual receive-side scaling (vRSS) | Allows virtual machines to support higher networking traffic loads by distributing the processing across multiple cores on the Hyper-V host and virtual machine. |
Table 2 Windows Server 2012 R2 features and key scenarios
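The two-tier placement behavior described in the table above (frequently accessed data on a solid-state tier, less frequently accessed data on a hard disk tier) can be modeled with a short sketch. Storage Spaces actually performs this transparently at a sub-file granularity on a schedule; the slab naming and threshold below are hypothetical simplifications.

```python
# Conceptual model of two-tier storage placement: "hot" slabs go to the
# SSD tier, the rest to the HDD tier. Storage Spaces does this movement
# transparently; the access-count threshold here is a hypothetical knob.

def assign_tiers(access_counts: dict, hot_threshold: int) -> dict:
    """Map each data slab to 'ssd' or 'hdd' by access frequency."""
    return {slab: "ssd" if count >= hot_threshold else "hdd"
            for slab, count in access_counts.items()}

def retier(current: dict, access_counts: dict, hot_threshold: int) -> list:
    """List slabs whose tier should change on the next optimization pass."""
    desired = assign_tiers(access_counts, hot_threshold)
    return [slab for slab, tier in desired.items() if current.get(slab) != tier]
```

The `retier` helper mirrors the periodic optimization pass: only slabs whose observed access pattern no longer matches their current placement are moved.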
Key drivers that would encourage customers to select the Software Defined Infrastructure pattern include lower cost of ownership and flexibility with shared SAS JBOD storage solutions (Variation A only). Decision points for this design pattern over others focus primarily on the storage aspects of the solution in combination with the innovative networking capabilities of SMB Multichannel and RDMA.
3 Non-Converged Infrastructure Pattern Overview
The non-converged infrastructure pattern uses Hyper-V and Failover Clustering in a standard deployment with non-converged storage (traditional SAN) and a network infrastructure. The storage network and network paths are isolated by using dedicated I/O adapters. Failover and scalability are achieved on the storage network through Multipath I/O (MPIO). The TCP/IP network uses NIC Teaming.
In this pattern, Fibre Channel or iSCSI is expected to be the primary connectivity to a shared storage network. High-speed 10 gigabit Ethernet (GbE) adapters are common for advanced configurations of TCP/IP traffic.
Figure 6 provides an overview of the non-converged infrastructure pattern.
Figure 6 Non-converged design pattern
The non-converged infrastructure pattern has two connectivity variations:
- Variation A: Fibre Channel
- Variation B: iSCSI
Figure 7 outlines a conceptual view of this pattern.
Figure 7 Non-converged design pattern variations
Although the following list of requirements is not comprehensive, this design pattern uses the following features:
- All the common components listed earlier
- Fibre Channel, iSCSI, or SMB 3.0-enabled SAN-based storage
- Storage-array support for offloaded data transfer (ODX) (optional)
Table 3 outlines the Windows Server 2012 R2 features that are utilized in this architectural design pattern in addition to the common features and capabilities outlined earlier.
Windows Server 2012 R2 Feature | Key Scenarios |
Virtual machine guest clustering enhancements (iSCSI, Virtual Fibre Channel, or shared virtual hard disks) | Supports virtual machine guest clusters by using iSCSI connections or by using the Hyper-V Fibre Channel adapter to connect to shared storage. Alternatively, a shared virtual hard disk feature can be used regardless of the shared storage protocol that is used at the host level. |
Offloaded data transfer (ODX) | Support for storage-level transfers that use ODX technology (SAN feature). |
Diskless network boot with iSCSI Target Server | Provides the network-boot capability on commodity hardware by using an iSCSI boot–capable network adapter or a software boot loader (such as iPXE or netBoot/i). |
Table 3 Windows Server 2012 R2 features and key scenarios
Key drivers that would encourage customers to select this design pattern include current capital and intellectual investments in SAN and transformation scenarios that include using an existing infrastructure for upgrading to a newer platform. Decision points for this design pattern include storage investments, familiarity, and flexibility of hardware.
4 Converged Infrastructure Pattern Overview
In this context, a “converged infrastructure” refers to sharing a network topology between traditional network and storage traffic. This typically implies Ethernet network devices and network controllers that have particular features to provide segregation, quality of service (performance), and scalability. The result is a network fabric with less physical complexity, greater agility, and lower costs than those associated with traditional Fibre Channel-based storage networks.
This topology supports many storage designs, including traditional SANs, SMB 3.0-enabled SANs, and Windows-based Scale-Out File Server clusters. In a converged infrastructure, all storage connectivity is network-based, and it uses a single media (such as copper). SFP+ adapters are more commonly used.
Servers for a converged infrastructure pattern typically include converged blade systems and rack-mount servers, which also are prevalent in other design patterns. The key differentiators in this pattern are how the servers connect to storage and the advanced networking features that converged network adapters (CNAs) provide. High-density blade systems are common; they feature advanced hardware options that present physical or virtual network adapters to the Hyper-V host and support a variety of protocols.
Figure 8 depicts a configuration for the converged infrastructure. Note the following:
- Host storage adapters can be physical or virtual, and they must support iSCSI, Fibre Channel over Ethernet (FCoE), and optionally SMB Direct.
- Many storage devices are supported, including traditional SANs and SMB Direct–capable storage.
Figure 8 Converged infrastructure pattern
Although the following list of requirements is not comprehensive, the converged infrastructure pattern uses the following features:
- All the common components listed earlier
- Fibre Channel, iSCSI, or SMB 3.02-enabled SAN-based storage
- Storage-array support for ODX (optional)
Table 4 outlines Windows Server 2012 R2 features that are utilized in the converged infrastructure pattern in addition to the common features and capabilities outlined earlier.
Windows Server 2012 R2 Feature | Key Scenarios |
Virtual machine guest clustering enhancements (iSCSI, Virtual Fibre Channel, or shared virtual hard disks) | Supports virtual machine guest clusters by using iSCSI connections or by using the Hyper-V Fibre Channel adapter to connect to shared storage. Alternatively, the shared virtual hard disk feature can be used regardless of the shared storage protocol that is used on the host level. |
Offloaded data transfer (ODX) | Support for storage-level transfers that use ODX technology (SAN feature). |
Table 4 Windows Server 2012 R2 features and key scenarios
5 Hybrid Infrastructure Pattern Overview
The Hybrid Infrastructure pattern includes reference architectures, best practices, and processes for extending a private cloud infrastructure to Microsoft Azure or a Microsoft service-provider partner for hybrid cloud scenarios such as:
- Extending the data center fabric to the cloud
- Extending fabric management to the cloud
- Hybrid deployment of Microsoft applications
The overall Microsoft “Cloud OS” strategy supports this architecture and approach. For more information about this strategy, see:
The key attribute of the Cloud OS vision is the hybrid infrastructure, in which you have the option to leverage an on-premises infrastructure, a Microsoft Azure infrastructure, or a Microsoft hosting partner infrastructure. Your IT organization is a consumer and a provider of services, enabling workload and application development teams to make sourcing selections for services from all three of the possible infrastructures or create solutions that span them.
The following diagram illustrates the infrastructure level, the cloud service catalog space, and examples of application scenarios and service-sourcing selections (for example, a workload team determining whether it will use virtual machines that are provisioned on-premises, in Microsoft Azure, or by a Microsoft hosting partner).
Figure 9 Hybrid IT infrastructure
With a hybrid infrastructure in place, IT consumers focus on the service catalog instead of on the infrastructure. Historically, full supporting stacks were designed from the hardware up through the operating system and application stack. In a hybrid environment, workloads instead draw from the service catalog that IT provides, which consists of services delivered by the hybrid infrastructure.
As an example, all three hybrid infrastructure pattern choices provide virtual machines; but in each case, those virtual machines have different attributes and costs. The consumer will have the choice of which one or which combination to utilize. Some virtual machines might be very low-cost but have limited features available, while others might be higher-cost but support more capability.
The hybrid infrastructure pattern enables customers to utilize private, public, and service provider clouds, each of which utilizes the same product and architecture foundations.
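The sourcing decision described above, in which virtual machines from the three infrastructures carry different attributes and costs, amounts to matching workload requirements against catalog offerings. A minimal sketch follows; the offering names, feature sets, and costs are entirely hypothetical.

```python
# Conceptual sketch of service-catalog sourcing: pick the cheapest
# infrastructure offering that meets a workload's required features and
# budget. Offerings, features, and costs are hypothetical illustrations
# of on-premises, Azure, and hosting-partner choices.

def choose_source(candidates: list, required_features: set, budget: float):
    """Return the name of the cheapest viable offering, or None."""
    viable = [c for c in candidates
              if required_features <= c["features"] and c["cost"] <= budget]
    return min(viable, key=lambda c: c["cost"])["name"] if viable else None
```

A workload team would run this kind of comparison per workload, which is how some teams end up with low-cost, limited-feature virtual machines and others with higher-cost, more capable ones, as noted above.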