
KMS Activation in Windows Server 2019


Hi! I’m Graeme Bray and you may remember me from previous articles such as KMS Activation for Windows Server 2016.  Today’s installment will coincide with a new Windows Server release.  I’m going to focus on getting you to enable AD Based Activation for those of you who have not yet done so. 

The location for the KMS Host Key is the same as Windows Server 2016.  You need to find the key on the Microsoft Volume License Service Center. 

KMS Activation for Windows Server 2019 can be run from the following Operating Systems with the appropriate prerequisites: 

Windows Server 2012 R2 

July 2016 Servicing Stack Update: KB3173424 

September 11, 2018 Cumulative Update: KB4457129 

*Note* – If you’re reading this after a subsequent Patch Tuesday, the most recent Cumulative Update will include these changes as well.  They were originally introduced in KB4343891. 

Windows Server 2016 

May 2018 Servicing Stack Update: KB4132216 

August 30, 2018 Cumulative Update: KB4343884 

*Note* – You can install any future Windows Server 2016 Cumulative update and get these fixes.  Most Organizations would have installed KB4457131 as part of their patching process.  All fixes for Windows Server 2016 are cumulative. 

Retrieve KMS License Key from the VLSC for Windows Server 2019 

To retrieve the key, follow these steps:

  1. Log on to the Volume Licensing Service Center (VLSC).
  2. Click License.
  3. Click Relationship Summary.
  4. Click License ID of your current Active License.
  5. After the page loads, click Product Keys.
  6. In the list of keys, locate Windows Srv 2019 DataCtr/Std KMS.

Install the Volume Activation RSAT Tools 

Log into a Windows Server 2012 R2 or Windows Server 2016 Machine 

  1. Install (or verify) that the RSAT Volume Activation Tools are available.
  2. Run Install-WindowsFeature RSAT-VA-Tools
  3. Since you still have PowerShell open, launch Volume Activation Tools by typing vmw.exe
  4. Click <Next> to skip that Welcome screen that everyone dislikes.
  5. Ensure that Active Directory-Based Activation is selected and click <Next>.
  6. Enter your Product Key and put the VLSC Product Name in the Display Name object.  This will help with future validation.
  7. Click <Next> and then <Commit>.  This will put the key into AD, assuming that you have the proper permissions (Enterprise Admin).
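For reference, a minimal PowerShell sketch of steps 2-3 above, run from an elevated session:

Install-WindowsFeature RSAT-VA-Tools   # install (or verify) the RSAT Volume Activation Tools
vmw.exe                                # launch the Volume Activation Tools wizard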

I know, you need *what* to enable AD Based Activation?  Stay tuned for a future article (from yours truly) on how to delegate THAT access. 

This is the *only* time that you need to use the CSVLK (KMS Key) to activate a system, at least in this forest. 

Client Licensing 

Now, if you’re like me, you always do a search for “Appendix A KMS” on your favorite search engine (Bing, of course!).  That takes you to the link below, which gives you the appropriate Generic Volume License Key (GVLK) that is hardcoded into each OS to activate.  If you download the ISO from the Volume License Service Center, this key is already in the OS and ready to activate.

https://docs.microsoft.com/en-us/windows-server/get-started/kmsclientkeys  
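As a quick, hedged example of what this looks like on a client: the slmgr.vbs script can install and activate a GVLK. The key below is a placeholder; substitute the actual GVLK for your edition from the Appendix A link above.

slmgr.vbs /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX   # install the Generic Volume License Key (placeholder shown)
slmgr.vbs /ato                                 # attempt activation (satisfied by AD Based Activation)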

A couple of caveats as far as AD Based Activation: 

  1. Your systems need to be able to reach the forest root DCs if this is in a child domain.
  2. You need to have extended the AD Schema to at least Windows Server 2012.

For more details: Activate Using Active Directory-based Activation 

Windows Server 2019 Activation: https://docs.microsoft.com/windows-server/get-started-19/activation-19  

Now, get going!  Activate Windows Server 2019 in your environment.  Use it in a lab, see what use case scenarios you can find to implement some new features.  You should expect to see more from us on Windows Server 2019 features in the future. 

Thanks! 

Graeme


Notes from the Field: Microsoft SDN Software Load Balancers


Kyle Bisnett and Bill Curtis here. We are two Software Defined Network Blackbelts and Premier Field Engineers at Microsoft and specialize in Hybrid Cloud technologies, which includes Cloud Platform System, Azure Stack and WSSD/SDDC. Most importantly, we ensure it’s easy for our customers and partners to deploy and leverage Software Defined Networking (SDN), whether it’s within an enterprise or as part of a Partner Solution (WSSD).

Recently, our customer came to us asking questions about our SDN Load Balancers (SLB) as they were looking into using fewer physical appliances and deployments of the venerable Microsoft Network Load Balancing (NLB) with an SDN solution. In this blog, we will cover some common questions we received from this customer and others in the field about SDN SLB.

Briefly, what is Microsoft Software Defined Networking?

If you have deployed Windows Server 2016 and/or Windows Server 2019, chances are you’ve heard about Software Defined Networking (SDN) that comes at no additional cost in our Datacenter SKU. Also, if you’ve looked at our prior blogs, you have seen mentions about SDN going mainstream.

Microsoft SDN provides software-based network functions such as virtual networking with switching, routing, firewalling with micro-segmentation, third-party appliances, and of course load balancing – the subject of today’s post. These are all virtualized and highly optimized for availability and performance and, like Storage Spaces Direct, are a key component of the Windows Server Software-Defined (WSSD)/Software Defined Datacenter (SDDC) solutions.

Why should I use Microsoft’s SDN Software Load Balancer?

…There are plenty of other SDN Load Balancer solutions that have been around for longer, right?

Microsoft SDN is an end-to-end solution. All the components work in harmony together, and you can leverage features that are a direct result of this synchronization, such as Direct Server Return (DSR), health probing on the Hyper-V hosts, and NAT functionality. Keep in mind, the other benefit is from an administrative perspective, as you no longer need to worry about expensive support contracts, hardware upgrade cadences (these are just Windows VMs), and some of the odd items like Active/Passive. All SLB MUXs are always Active/Active, whether there are two or eight.

SDN in Windows Server 2016/2019 is closely based on the SDN running in Microsoft Azure

Software Defined Networking is being utilized across 32 different global Azure datacenters. When you configure a Standard or Basic Load Balancer, Virtual Network (vNet), Site to Site VPN Connections and more in Microsoft Azure, you are using SDN architecture that has been ported over to SDN in Windows Server 2016/2019 and Azure Stack. Microsoft SDN is well-tested at scale and is very competitive with other SDN products in terms of performance and scalability.

Are the SLB MUXs highly available?

If so, how can I ensure it is checking my Guest VMs to ensure they are ‘up’ or ‘down’?

SLB MUXs are fault tolerant and utilize Border Gateway Protocol (BGP), a dynamic routing protocol that advertises each MUX in the pool as a /32 route to the top-of-rack switch. When a keep-alive metric is missed, BGP automatically removes the individual load balancer from the routing table. This is helpful during a host outage or the monthly patching of an individual MUX.
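If you want to check the MUX resources and their state yourself, the Network Controller exposes them through its REST API. A minimal sketch, assuming the NetworkController PowerShell module is installed and https://nc.contoso.com is your (hypothetical) NC REST endpoint:

Get-NetworkControllerLoadBalancerMux -ConnectionUri 'https://nc.contoso.com'

Each returned MUX resource includes a configuration state you can use to confirm its health.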

So that’s great! We have fantastic fault tolerance for the MUX infrastructure, but how about our Guest VMs that leverage the SLB MUXs?

Well, we have a feature best known in the load balancing community as health probing, and our implementation is state-of-the-art. In Windows Server 2016 and above, we support both TCP probe-to-port and HTTP probe-to-port and URL.

Unlike traditional load balancer solutions, where the probe originates on the appliance and is sent across the wire to the guest IP, SLB probes originate on the host where that Guest VM IP is located and are sent directly from the SLB Host Agent running on the Hyper-V host to the VM IP. This eliminates wire traffic and spreads the overhead of conducting health probes across the Hyper-V hosts within the SDN-enabled cluster.

How much performance can I expect from the load balancers?

Direct Server Return (DSR) is a fantastic feature. In the two scenarios below, you’ll see this in action. For external traffic, DSR can eliminate most of the outbound traffic going through an SLB MUX, as it will send directly from the Hyper-V host to the top-of-rack switch/router. For internal load balancing, it can eliminate most traffic being received at the load balancer infrastructure; traffic will be strictly VM to VM after the initial packet. Let’s look at these scenarios:

External Load Balancing

For a Public Virtual IP (VIP) load balancing scenario, the initial packet will arrive at our public VIP on the Top of Rack switch/router, which will then route it to one of our SLB MUXs, then on to the host, and finally to the individual tenant VM. Now, on the outbound path, the egress packet avoids the MUX infrastructure altogether since the Hyper-V host has performed NAT on the packet and routed it directly to the Top of Rack switch. This increases available bandwidth for tenant and infrastructure workloads by 50% when compared to other appliances and solutions.

  1. Internet traffic routed to a Public VIP comes in through the Top-of-Rack switch/router, and then, using ECMP, an SLB MUX VM is chosen to route the traffic.
  2. The SLB MUX VM then finds what Dynamic IPs (DIPs – the actual IPs of the VMs) the Public VIP is associated with. One of the DIPs is chosen, the traffic is encapsulated into VXLAN packets, and is then sent to the Hyper-V Host which owns the VM with the chosen DIP.
  3. The Hyper-V Host receives the packets, removes the VXLAN encapsulation, and routes it to the VM.
  4. When the VM sends a response packet, it is intercepted by the Hyper-V Host’s virtual switch, the response packet is re-written with the Public VIP, and it is routed directly to the Top-of-Rack switch/router, bypassing the SLB MUX VMs. This results in massive scalability, as DSR eliminates the SLB MUX VM(s) from being a bottleneck for return traffic.

Internal Load Balancing

During the internal load balancing scenario, the initial packet will flow to the internal VIP; the SLB MUX will find the DIPs (guest VMs), encapsulate the packet using VXLAN, and send it to the host, which removes the encapsulation and forwards it to the DIP, i.e. the tenant VM. Now, the best part: all traffic after this initial exchange will avoid the MUX and flow VM to VM until a health event occurs, such as a probe failure. This can eliminate a large percentage of internal load balancing traffic.

  1. The first internal VIP request goes through the SLB MUX to pick a DIP.
  2. The SLB MUX detects that the source and destination are on the same VM Network and then the MUX sends a redirect packet to the source host.
  3. The source host then sends subsequent packets for that session directly to the destination. The SLB MUX is bypassed completely!

How do I grant my business units access to a jump box within an isolated vNet?  Could I also grant Internet access to all of the VMs without using a Gateway connection?

If you have ever created a virtual machine in Microsoft Azure, you will have a Public IP and a Private IP. The private IP is used for intra-vNet traffic in Azure or can be used for ExpressRoute and/or Site to Site. The public IP, however, is a NAT interface that you can expose RDP 3389 on. SDN has the same functionality for both inbound and outbound NAT. Outbound NAT is especially useful to give all your VMs within a vNet internet access, and you do not need a Gateway connection for each vNet!

Inbound NAT

Let’s walk through how inbound NAT occurs. NAT does not terminate within the load balancer but on the Hyper-V host itself. When the Public VIP is created and configured, along with an external port, the SLB MUXs will start advertising the VIP by updating the routes to the Top of Rack switch using BGP. When a packet is destined for the Public VIP, the switch will forward it to an available MUX, which will look up the DIPs and encapsulate the packet using VXLAN so it can be forwarded to the Hyper-V host. The Hyper-V host will remove the encapsulation and rewrite the packet so that the destination is now the DIP and the internal port that you wish to use.

A great use of this feature that we see in the field: “Our infrastructure team wishes to allow a business unit RDP access to multiple VMs inside of the ‘Finance’ vNet.” Within VMM, the infrastructure team can assign separate public ports, e.g. 3340, 3341, etc., that still have the same back-end port of 3389 but map to different DIPs. This fulfills the requirement of RDP to a few jump boxes inside the vNet.

Can I use SDN Software Load Balancers on VMs that are not using Hyper-V Network Virtualization?

Yes! In some organizations, the extra configuration required for Hyper-V Network Virtualization (HNV), as well as the need for SDN RAS Gateways so that VMs on HNV-enabled networks can communicate with the physical network, can be overkill. Virtual machines that are not using HNV VM Networks can still take advantage of SDN load balancing.

Microsoft Network Load Balancer can also be used, but it does not come close to providing all the robust features and scalability that SDN SLB provides, as mentioned above.

If the following criteria are met, SDN SLB can be used on non-HNV VMs:

  • Top of Rack Switch is BGP capable
  • Network Controller is deployed
  • Hyper-V Hosts are managed by Network Controller
  • Software Load Balancer MUX VMs have been deployed and onboarded by NC
  • The VM Networks being used by the VMs that require load balancing are on a defined VLAN and are managed by Network Controller

How do I get started evaluating SDN Software Load Balancers?

Deploying SDN has never been easier!  As announced during our Top 10 Network Features series, SDN has gone mainstream!

There are two methods for deploying SDN:

SDN Express

SDN Express now includes a GUI (see our SDN Goes Mainstream post)!  You can also deploy via PowerShell for environments not utilizing System Center Virtual Machine Manager (SCVMM). Additional details on how to deploy SDN using SDN Express are located here, and scripts and other resources are in the Microsoft SDN repository on GitHub.

System Center Virtual Machine Manager 2016 or higher

SDN can also be deployed and managed by SCVMM 2016 and higher. Instructions for how to deploy SDN in SCVMM are located here and scripts and other resources are in the Microsoft SDN repository on GitHub.

How can Microsoft help my enterprise become part of SDN?

That’s a great question and we are sure glad that our customer asked. There are a few different options listed below:

Premier Advisory Call

Ask your Technical Account Manager (TAM) who is assigned to your account to get you in touch with the Microsoft SDN Blackbelt community. We can hold a remote advisory call to discuss prerequisites and ensure that it will meet the requirements of your business. This is also a great time for a Q & A session!

Premier WorkshopPLUS: Windows Server: Software Defined Networking

This is a full 4-day workshop that walks through planning, architecture, implementation, and operation of an SDN-enabled hybrid cloud. It includes labs hosted on our Learn on Demand platform; simply bring your own device and you gain access to all the content and labs. Also, coming toward the end of this year, our Unified Support customers will have access to all the Blended Learning Unit (BLU) video recordings we completed. It’s sort of like a Bill and Kyle SDN on-demand channel!

SDN Blackbelt Community

The SDN Blackbelt community is also here to assist remotely. We can certainly have an advisory call as mentioned and that should be your first step. However, if you have a quick question or need assistance, send us a quick note at SDNBlackbelt@microsoft.com and one of us will get back to you.

Summary

We hope you found this blog useful and the scenarios beneficial. There are some fantastic features gained from implementing SDN, including the battle-tested and performant Software Load Balancing included in your Datacenter SKU. Stay tuned for more Notes from the Field and check the tag below for the full series.  We plan to post future blogs that will discuss many other components of SDN!

Stay tuned and see you next time!

Kyle Bisnett and Bill Curtis

Breaking into Windows Server 2019: SDN Load Balancers


Happy Friday folks! Brandon Wilson here once again to give you a pointer to some more information covering a topic touched on by the Windows Core Networking PG, and that is Software Defined Networking (SDN) load balancing in Windows Server 2016 and Windows Server 2019.

Notes from the Field: Microsoft SDN Software Load Balancers

https://blogs.technet.microsoft.com/networking/2018/10/10/notesfromthefield-slb/

Kyle Bisnett and Bill Curtis here from the field, two of the SDN Blackbelts who share knowledge around architecture, implementations, and lessons learned!  We are excited to have written this new blog below on our Software Load Balancing Multiplexers (SLB MUXs) that are part of the Software Defined Network (SDN) framework in Windows Server 2016 and Windows Server 2019.

At a high level, Microsoft SDN provides software-based network functions such as virtual networking with switching, routing, datacenter firewall for micro-segmentation, third-party appliance support, and load balancing.  As mentioned above, in this blog we will look at the SLB MUXs and the feature set they provide, such as inbound NAT and outbound NAT, how performant they are, and why they are such an appealing option for our customers! This is a Q&A-style blog built from questions customers have asked along the way.

As always, if you have comments or questions on the post, your most direct path for questions will be in the link above.

Thanks for reading, and we’ll see you again soon!

Brandon Wilson

Extending Hardware Inventory for System Center Configuration Manager


Hello everyone, Jonathan Warnken here, and I am a Premier Field Engineer (PFE) for Microsoft. I primarily support Configuration Manager, and I have been getting a lot of questions recently on how to collect custom information and include it in the device inventory within Configuration Manager. I wanted to share one way to accomplish this that demonstrates some of the great ways to extend the built-in features. For this post, I am going to show how to capture information about local machine certificates. I do want to take a moment to thank MVP Sherry Kissinger for her post with the base PowerShell script used to collect the certificate information.

Disclaimer: The sample scripts are not supported under any Microsoft standard support program or service. The sample scripts are provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the sample scripts and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the sample scripts or documentation, even if Microsoft has been advised of the possibility of such damages.

Now on to the good stuff. PowerShell makes it easy to get information about certificates. Using Get-ChildItem and selecting one certificate, we can see all the information available:
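For example, here is a quick one-liner (a minimal sketch) that dumps every property of the first certificate in the local machine Personal store:

Get-ChildItem Cert:\LocalMachine\My | Select-Object -First 1 | Format-List *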

While you can collect all of this information, we are going to limit it to just the Thumbprint, Subject, Issuer, NotBefore, NotAfter, and FriendlyName. We are also going to add custom values of ExpiresinDays and ScriptLastRan. Next, we use a PowerShell script to collect the information and publish it to a custom WMI class.

https://github.com/mrbodean/AskPFE/blob/master/ConfigMgr%20Certificate%20Inventory/publish-CertInfo2WMI.ps1
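If you are curious about the mechanics the script relies on, the following is a heavily trimmed sketch of the core pattern: create a custom WMI class and publish one instance per certificate. The real script handles more properties and stores; the two properties and the root\cimv2 namespace shown here are simplifications for illustration.

# Define a custom WMI class (cm_CertInfo) with a couple of the inventoried properties
$class = New-Object System.Management.ManagementClass('root\cimv2', [string]::Empty, $null)
$class['__CLASS'] = 'cm_CertInfo'
$class.Qualifiers.Add('Static', $true)
$class.Properties.Add('Thumbprint', [System.Management.CimType]::String, $false)
$class.Properties['Thumbprint'].Qualifiers.Add('Key', $true)
$class.Properties.Add('Subject', [System.Management.CimType]::String, $false)
$class.Put() | Out-Null

# Publish one instance per certificate in the local machine Personal store
foreach ($cert in Get-ChildItem Cert:\LocalMachine\My) {
    $instance = ([wmiclass]'root\cimv2:cm_CertInfo').CreateInstance()
    $instance.Thumbprint = $cert.Thumbprint
    $instance.Subject = $cert.Subject
    $instance.Put() | Out-Null
}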

Next, create a configuration item that uses the script to publish the certificates in the local machine Personal, Trusted Publishers, and Trusted Root certificate stores to WMI, which will allow the hardware inventory to collect the information.

  1. Download https://github.com/mrbodean/AskPFE/raw/master/ConfigMgr%20Certificate%20Inventory/Inventory%20Machine%20Certificates.cab to c:\temp\Examples
  2. Navigate to Assets and Compliance\Overview\Compliance Settings\Configuration Baselines
  3. Click on “Import Configuration Data” (you will find this as a button on the top toolbar or in the context menu when you right click on Configuration Baselines)
    1. Select C:\temp\Examples\Inventory Machine Certificates.cab
    2. Click Yes on the warning “The publisher of Inventory Machine Certificates.cab file could not be verified. Are you sure that you want to import this file?”
    3. Click next twice to progress through the wizard and once complete, click close.
  4. You will now see a new sub folder named Custom under Configuration Items (Assets and Compliance\Overview\Compliance Settings\Configuration Items\Custom) and a configuration item named “Inventory Machine Certificates” in the Custom folder.
  5. You will also see a Configuration baseline named “Inventory Machine Certificates”
    1. Deploy this baseline to a test collection

The documentation for using configuration items is available at:

https://docs.microsoft.com/en-us/sccm/compliance/deploy-use/configuration-items-for-devices-managed-with-the-client

https://docs.microsoft.com/en-us/sccm/compliance/deploy-use/create-configuration-baselines

https://docs.microsoft.com/en-us/sccm/compliance/deploy-use/deploy-configuration-baselines

https://docs.microsoft.com/en-us/sccm/compliance/deploy-use/monitor-compliance-settings

These steps will extend the Hardware Inventory to collect the certificate information that has been published in WMI. To extend the inventory you must use a MOF file. MOF files are a convenient way to change WMI settings and to transfer WMI objects between computers. For more info see https://technet.microsoft.com/en-us/library/cc180827.aspx

  1. Download https://raw.githubusercontent.com/mrbodean/AskPFE/master/ConfigMgr%20Certificate%20Inventory/CertInfo.mof to c:\temp\Examples
  2. Create a new Custom Device Client Setting (Administration\Overview\Client Settings)
    1. Name the setting “Custom HW Inventory” and only enable Hardware Inventory
    2. Select Hardware Inventory on the left just under General
      1. Ensure Enable hardware inventory on clients is set to yes
      2. The default schedule is 7 days; update the schedule if you would like to change it
      3. Click the “Set Classes …” button
        1. Click on the “Import …” button
          1. Select c:\temp\Examples\CertInfo.mof
        2. Once back on the Hardware Inventory Classes dialog, ensure the CertInfo (cm_CertInfo) class is enabled
        3. Click Ok
      4. Click Ok (again)
    3. Deploy the “Custom HW Inventory” Client Setting to a test collection.

Once the configuration item runs and publishes the data into WMI, the next time hardware inventory runs for systems in the test collection, the certificate information will be available for reporting in Configuration Manager.

These steps will create a console query that you can use to search for systems with a specific certificate thumbprint:

  1. Download https://raw.githubusercontent.com/mrbodean/AskPFE/master/ConfigMgr%20Certificate%20Inventory/Find_Cert_Query.MOF to c:\temp\Examples
  2. Navigate to Monitoring\Overview\Queries
  3. Click on “Import Objects”; this is available as a button on the top toolbar and in the context menu when you right click on Queries
    1. Click next to navigate through the wizard
    2. On the MOF File Name step, select c:\temp\Examples\Find_Cert_Query.MOF
  4. Once the import completes, you will see a query named “Find Machines with a Certificate by thumbprint”

  5. Once you have systems reporting the certificates as part of the inventory, you can run this query
    1. When you run the query, it will prompt you for the thumbprint of a certificate to search for
    2. If any systems are found with the certificate, the system name and the thumbprint will be returned by the query

This is a SQL query that can be used to view the certificate inventory data and can also serve as the basis for creating a custom report:

SELECT sys.Name0 AS 'Name', Location0 AS 'Certificate Location', FriendlyName0 AS 'Friendly Name',
       ExpiresinDays0 AS 'Expires in Days', Issuer0 AS 'Issuer', NotAfter0 AS 'Not After',
       NotBefore0 AS 'Not Before', Subject0 AS 'Subject', Thumbprint0 AS 'Thumbprint',
       ScriptLastRan0 AS 'Script Last Ran'
FROM v_GS_CM_CERTINFO
INNER JOIN v_R_System AS sys ON v_GS_CM_CERTINFO.ResourceID = sys.ResourceID

 

Thank you for reading, and I hope this helps you out!

File Not Found Exception while configuring Log Analytics


This is an update for our customers who are hitting a FileNotFound exception while configuring a Log Analytics workspace from the SCOM console. This blog describes a workaround for the issue; please follow the steps below.

Workaround:

  1. Copy the Advisor dll (version 1.0.5) from this link to the Console folder (C:\Program Files\Microsoft System Center\Operations Manager\Console).

  2. Configure your Log Analytics workspace connection; this time it will work.

  3. Remove the Advisor dll from the Console folder after the Log Analytics configuration is done.

Note: If you do not delete Microsoft.IdentityModel.Clients.ActiveDirectory from the Console folder, you will hit the above exception while adding a subscription to the Azure MP.
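A minimal PowerShell sketch of the same workaround, assuming the downloaded dll is sitting in your Downloads folder:

$console = 'C:\Program Files\Microsoft System Center\Operations Manager\Console'
Copy-Item "$env:USERPROFILE\Downloads\Microsoft.IdentityModel.Clients.ActiveDirectory.dll" -Destination $console
# ...configure the Log Analytics workspace connection from the SCOM console...
Remove-Item "$console\Microsoft.IdentityModel.Clients.ActiveDirectory.dll"   # remove it afterwards, or the Azure MP will hit the exception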

What is causing this exception?

This exception will only occur when you have Azure MP installed on your system.

The reason is that the Azure MP and the Advisor MP use different versions of the Microsoft.IdentityModel.Clients.ActiveDirectory library.

When the intended version of this library isn’t found, the Console throws a FileNotFound exception.

 

 

Use Azure Site Recovery to migrate Windows Server 2008 before End of Support


This blog post was authored by Sue Hartford, Senior Product Marketing Manager, Windows Server.

Don’t let the name fool you. Azure Site Recovery (ASR) can be used as an Azure migration tool for 30 days at no charge. It has been used for years to support migration of our 64-bit versions of Windows Server, and we are pleased to announce it now supports migration of Windows Server 2008 32-bit applications to Azure Virtual Machines.

This is good news for organizations that wish to take advantage of the new 2008 End of Support option to get three additional years of Extended Security Updates for free in Azure. The 2008 and 2008 R2 versions of Windows Server will reach End of Support on January 14, 2020. But customers who migrate these versions to Azure Virtual Machines will continue to get free security updates until January 2023. This buys customers more time to upgrade or modernize, while starting to gain the benefits of cloud.

Did you know that if you have Software Assurance, you can use existing Windows Server licenses to save on Azure? In fact, if you are using the Datacenter edition of Windows Server on-premises, you can keep it running on-premises and save on Azure Virtual Machines at the same time. Find more details on our Azure Hybrid Benefit web page.

With the end of support for Windows Server 2008 in January 2020 fast approaching, now is a great time to begin modernizing your applications and infrastructure with the power of Azure. For more information about migration with Azure Site Recovery or other great tools from our partners, check out the Azure Migration Center.


Leap Seconds for the IT Pro: What you need to know


Hi Everybody – Program Manager Dan Cuomo here to tell you, the IT Pro, everything you need to know about Leap Seconds on Windows. If you saw our recent blog series on the Top 10 Networking Features, you may have already noticed an announcement about Leap Second support included in Windows Server 2019 and Windows 10 October 2018 Update.

Note: If you’re an Application Developer, stay tuned for our future post Leap Seconds for the Application Developer: What you need to know

For most IT Professionals, leap seconds may not be a concern. However, if you’re a customer with time-sensitive applications or in a regulated industry requiring high-accuracy time, a measly little second could hurl you into an auditing and compliance frenzy. Whether you call it a v-team or a tiger team, nobody wants to have to write those status reports, After Action Reports, or Root Cause Analyses (or whatever your organization calls them) to explain just what exactly went wrong. A leap second comes and goes quickly, but the effects could last some time.

So in this article, we’ll attempt to explain everything the IT Pro needs to know so you can explain, test, and deploy Windows Server 2019 and Windows 10 October 2018 Update with confidence for your time-sensitive scenarios.

Note: Leap second support is only included in Windows Server 2019 and Windows 10 October 2018 Update and later releases, so this content is not applicable to earlier operating systems.

What are Leap Seconds

Let’s first understand what a leap second is. A leap second is an occasional 1-second adjustment to UTC. As the earth’s rotation slows (e.g. tidal forces, earthquakes, hurricanes, etc.), UTC diverges from mean solar time, or astronomical time.  Leap seconds are added to keep the difference between UTC and astronomical time to less than 0.9 seconds. Don’t worry, we don’t need to start colonizing new planets (yet 😉).  But still, wish we found out how that jump across galaxies worked out for the Stargate Universe crew…

An organization called the International Earth Rotation and Reference Systems Service (IERS) oversees the announcement of Leap Seconds. They release several bulletins; Bulletin C is released every 6 months to confirm whether there will be a leap second or not.

Note: At the time leap seconds were introduced in 1972, a necessary correction of ten seconds was made to UTC. There have since been 27 leap seconds added to UTC for a total of 37 one-second corrections. Leap seconds are added, on average, every 1.5 yrs (NIST FAQ).

Leap Seconds on Windows Overview

Now let’s talk about some of the high-level principles needed to understand Leap Seconds on Windows.

UTC-Compliant Leap Seconds

If you are in a regulated industry, you must not only implement leap seconds, but you must do so in a UTC-compliant manner. This means that the leap second must be added to the last minute of the UTC day. During this minute, the clock goes from 0 to 60 seconds (for a total of 61 seconds).

Windows Server 2019 and Windows 10 October 2018 Update implement the leap second in a UTC-compliant manner, enabling customers to meet the requirements of regulated industries.

Industry experts have gone on record to denounce leap second “smearing” – an alternative approach that carves the leap second into smaller units and inserts them throughout the day. Leap second smearing is not UTC-compliant and as such, Windows does NOT implement leap second smearing.

Built for compatibility

The majority of Windows users will not need leap second information; either their workloads do not depend on that degree of accuracy or they are not under industry regulations. If this description sounds like you, feel free to tweet a link to this blog, might I recommend…

…And feel free to stop reading. While the system (kernel) is tracking leap seconds, they will not affect your everyday life, as applications are never notified that a leap second is occurring unless an application has specifically “opted in.”  Applications are, by default, none the wiser unless action is taken.

This is important both for application compatibility and for customers with heterogeneous operating system environments, which need to interoperate seamlessly as they always have prior to this release. Many applications expect seconds to be between 0 and 59. If the application isn’t expecting a 60, apps could fail, cats and dogs living together, mass hysteria!

Previous Leap Seconds

For these same reasons, we do not track prior leap seconds. Our goal is to enable customers needing high accuracy time moving forward. Regulations requiring high-accuracy, UTC-compliant time did not come into effect until relatively recently, and therefore prior leap seconds are not necessary to track. For reference, the last leap second prior to the release of leap-second-aware Windows was December 31st, 2016; that is, at the time of writing, we have not had a leap second since this date. Leap seconds after this date will be tracked by Windows Server 2019 and Windows 10 October 2018 Update.

What happened to previous leap seconds

There’s a logical question of how previous operating systems treated leap seconds. If previous operating systems didn’t track leap seconds, are they 37 seconds off from UTC?

No, although previous operating systems did not track leap seconds, when they synchronized their time at the next interval, they recognized that they were one-second behind and time was moved forward to match the current UTC time.

A Tale of Two Timelines

“It was the best of times, it was the worst of times…It was the epoch of belief, it was the epoch of incredulity.” Since leap seconds are new in Windows 10 October 2018 Update and Windows Server 2019, prior operating systems will not know about this augmented time scale. As a result, the timelines under the hood of Windows will begin to diverge between these two operating systems as leap seconds occur.

So when the next leap second rolls in, we’ll begin an alternate timeline for Windows 😊

Unless your application is leap second aware, it is unlikely that you will notice this delta. However, if you were to view an event log from a leap-second-aware system on a machine that is not aware of the leap seconds, the time displayed for the event will be off by the number of leap seconds known by the system (mmc.exe is opted in by default).

Revert to Prior OS Behavior

As a reminder, applications must opt in to receiving leap second notifications, so leap seconds will not affect any applications by default, and it is likely unnecessary to modify the default behavior.

However, if you have a heterogeneous, time-sensitive environment, you can revert to the prior operating system behavior and disable leap seconds across the board by adding the following registry value:

HKLM:\SYSTEM\CurrentControlSet\Control\LeapSecondInformation

Type: REG_DWORD

Name: Enabled

Value: 0 (disables the system-wide setting)

Value: 1 (enables the system-wide setting)

Next, restart your system.
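For example, a minimal sketch of the same change from an elevated PowerShell session (the New-Item -Force simply ensures the key exists before the value is written):

$key = 'HKLM:\SYSTEM\CurrentControlSet\Control\LeapSecondInformation'
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name Enabled -Value 0 -Type DWord   # 0 = disable, 1 = enable
Restart-Computer   # the change takes effect after a restart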

How Leap Seconds Propagate

Every four years, we have a leap year – this is known and predictable. Leap seconds, however, are different in that they are not on a regular cadence. Instead, leap seconds are announced by IERS only 6 months in advance. From there, GPS distributes the leap second notification to time servers and ultimately to Windows systems. So let’s talk about some of the mechanisms in place to make sure that you get the leap second notification.

Time Server Distribution

The Windows Time service includes a server provider that allows a Windows system to operate as a time server. For example, when you add a domain controller to your forest, this domain controller can serve time to other clients on the network through this mechanism. This is not the only method of installing a time server; you can check whether your system is operating as a time server by using the following command and looking for Enabled: 1 in the output:

w32tm /query /configuration

The Windows Time server distributes the leap second notification to time clients. As GPS distributes time (and the leap second notification) to the Windows Time server, it will pass that notification on to clients; to be clear, your system doesn’t need to be a domain controller to do this.

Windows Update

But what if your system is offline when the notification comes? Or, more likely, what if you re-image your system? You’ll want to make sure that new systems know about the upcoming leap second, and if the new system is created after a leap second, you’ll want to make sure that this system is synchronized with the other machines on the network.

To make sure this is possible, we’ll distribute leap second notifications through Windows Update as well. This provides a simple mechanism for reporting (nodes that have the latest updates have the leap second information as well).

Best Practice: The simplest and most effective manner for distributing and verifying leap second information across your environment is through Windows Update.  If you’re on the latest updates, you’ll have the notifications!

Hyper-V VMIC

If you have Hyper-V virtual machines, the Hyper-V virtual machine integration components will also provide leap second notifications to those virtual machines.  If the virtual machine is not running one of the leap-second-aware operating systems (or later), this will have no effect.

Verify that your system got the leap second

In addition to verifying updates across your system, you can also use the following command to view the leap seconds known by a specific system. For example, the output might show that a positive (+) leap second will be inserted after 23:59:59 on 6/30/2019:

w32tm /leapseconds /getstatus /verbose

 

Testing Applications

Applications must be written to consume and process leap seconds – as you’ve read a number of times already, we assume that applications are not leap-second aware. You can search every application’s documentation to find out if it’s leap second aware, or, if you’re an IT Pro in one of these regulated industries, we anticipate that you will want to test and verify your applications or system images for leap seconds.

If you want to manually test and opt in an application, identify the process name (for example, winword.exe for Word).

Next, open the registry editor and navigate to:

HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options

Add a key with the same name as the process you want to opt in to leap seconds. In this example, we’ve opted in the winword.exe process by creating a registry key with that name.

Next, create a REG_DWORD named GlobalFlag2 with a value of 1.

Now restart the process, insert leap seconds as before, and test critical application functionality.
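A minimal PowerShell equivalent of those steps, using the winword.exe example from above:

$ifeo = 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\winword.exe'
New-Item -Path $ifeo -Force | Out-Null
New-ItemProperty -Path $ifeo -Name GlobalFlag2 -PropertyType DWord -Value 1 -Force | Out-Null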

If your application doesn’t support leap seconds, please contact the application owner and tell them to check our future post, Leap Seconds for the Application Developer: What you need to know.

Testing Systems

Instead of testing individual applications one by one, you may want to test a holistic system. To do this, open the registry editor and navigate to:

HKLM:\SYSTEM\ControlSet001\Control\Session Manager

Next, create a REG_DWORD named GlobalFlag2 with a value of 1.

Restart the system, then insert leap seconds as before and test critical application functionality. Note any application or system events in the event log.

Summary

Most IT Professionals may not need to be concerned about leap seconds. However, if you’re a customer in a regulated industry requiring high-accuracy time or have time-sensitive applications, you need to ensure your systems apply and maintain time accurately through a leap second. Windows Server 2019 and Windows 10 October 2018 Update bring support for true UTC-compliant leap seconds. To make sure that these are properly implemented on your systems, you should verify your patch management strategy, application compatibility, and more.

Please give this a shot, and of course let us know how it went!

Dan “my leap seconds land on 60” Cuomo

This Blog Is Now Retired


Hello Everyone

We have retired this blog. We are now blogging on a new Configuration Manager blog on the Microsoft TechCommunity located here: https://techcommunity.microsoft.com/t5/Configuration-Manager-Blog/bg-p/ConfigurationManagerBlog

Join us there for informative and best practice posts from the Configuration Manager team.

To get started:

  1. Create your account on the Microsoft TechCommunity
  2. Join the System Center Configuration Manager community
  3. Follow our new blog
  4. Add your contributions and comments to our discussion space

Note: This TechNet blog will remain in place for a period to allow access to historical blog posts but no further updates will be made here.

See you over on the TechCommunity.

-Yvette


Does Disabling User/Computer GPO Settings Make Processing Quicker?

$
0
0

Hi everyone! Graeme Bray with you again today to talk about an age-old discussion point. Does Group Policy process quicker if you disable the User/Computer sections of a specific policy?

We’re going to walk through my lab setup, grabbing the policies, comparing them, and then confirming that I actually did disable the policy section.

Without further ado, continue on to how I set up my lab for this test.

Lab Setup

  • Two Domain Controllers, in separate sites, with appropriate subnets for my test server
  • Test server running Windows Server 2012 R2, fully patched (as of September 2018).
    • 1 vCPU (Added: Oct 22, 2018)
    • 1 GB RAM
  • 18 Group Policies configured, some with WMI Filters, others with Group Policy Preferences, none with any specific Client Side Extension organization in mind. Also included are the Microsoft Security Baselines. All are currently configured for a “GPO Status” of Enabled.
  • GPSVC Debug Logging turned on for system SERVER12.
    • New-Item -Path 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion' -Name Diagnostics
    • New-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Diagnostics' -Name GPSvcDebugLevel -PropertyType DWord -Value 0x30002 -Force
    • New-Item -Path C:\windows\debug\usermode -ItemType Directory | Out-Null

    These three PowerShell commands will create the registry key, the DWORD value, and the folder necessary for the actual log.
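When you’re done testing, you can turn the debug logging back off by removing the value:

Remove-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Diagnostics' -Name GPSvcDebugLevel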

Test #1 – All Policies Enabled

After setting up my lab, I ran a GPUpdate /force. I was not updating any policies, so the settings themselves didn’t change. I didn’t have many user settings configured, so I wasn’t too terribly concerned about those. I wanted to focus specifically on the computer policy processing time. This tends to be the longest, due to any number of factors, including Security Policies and WMI Filters targeting specific OS versions.

I did my GPUpdate /force 3 times. The first test, from the beginning of processing at 0.031 seconds, finished processing Local Group Policy at 0.640 seconds.
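If you want a coarse end-to-end number in your own environment (not how the timings below were derived; those come from the gpsvc.log timestamps), you can wrap a refresh in Measure-Command. Note this includes both user and computer processing:

Measure-Command { gpupdate.exe /force | Out-Null } | Select-Object TotalSeconds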

This seems like a long time. If we adjust the time based on some things that BOTH tests will have to encompass, we can shorten the time from 0.609 seconds down to something easier to use for getting a median between my 3 tests.

We want to skip to the initial “Checking Access to…” entry. In the “Searching for Site Policies” section, we are doing bandwidth checks and other domain/forest information queries.

On policy GUID 244F038B-8372-494A-AE7D-BBCA51A79273, the reason it is slightly slower is due to a WMI Filter check to see if it is Windows Server 2016.

The total time in the first test to process and get every policy is 0.265 seconds. Using the same methodology for the other two “Fully Enabled” tests, the times came to:

Number Time (seconds)
Test #1 0.265
Test #2 0.25
Test #3 0.172
Average 0.229

Test #2 – All Policies “User Configuration Disabled”

Without going into the same detail, the same methodology was used with all policies having “User Configuration Disabled”. Times are below, with a couple screenshots to prove I’m not making up the data.

Number Time (seconds)
Test #1 0.234
Test #2 0.265
Test #3 0.156
Average 0.218

As you can see, the difference is a grand total of 11 thousandths of a second (0.229 vs. 0.218, on average).

Test #3 – Policies Half and Half (Randomly Chosen)

Finally, I picked half of my policies and disabled the User configuration section. Results are below:

Number Time (seconds)
Test #1 0.297
Test #2 0.25
Test #3 0.203
Average 0.25

But But… How can you prove what you did?

I know, I see it coming… How do I know in your logs that a User section of the policy was disabled?

Great question, you can see details on the Flags when Group Policy Debug Logging is enabled on this MSDN article.

See my screenshot below, with “Found flags of: ##”

Tl;dr:

Flag value 0 means Computer/User Enabled

Flag value 1 means User Disabled

Flag value 2 means Computer Disabled

Flag value 3 means policy Disabled.

Now, the question is, what does this mean? For years we’ve all heard, been told, and explained that we should disable parts of a GPO that are not in use, especially for performance reasons. From this (somewhat) statistical approach, you can see that there are no obvious benefits to disabling any specific side of a policy if it is not in use. The Group Policy engine still needs to query Active Directory to determine each policy that is linked to the Site, Domain, and OU. It still needs to determine what is in the policy, Group Policy extension-wise, and get all of the information about the policy itself.

What should I do?

This is purely a decision you need to make. Some customers will continue to disable sides of the policy based on management and preference. Others will continue to forget that it exists. The choice is yours to make, but please stop proliferating the notion that disabling User/Computer sections within a GPO improves performance.

For what it’s worth, don’t combine User and Computer policies into the same GPO. Split them out, link them to the appropriate OUs, and for Pete’s sake, please avoid loopback whenever possible.

Hopefully this article has helped detail reasons why it’s not that important to disable portions of a GPO. The end result is, at most, 11 thousandths of a second. Nearly instantaneous and within any margin of error, depending on environment.

Thanks for reading

Graeme

Leap Seconds for the AppDev: What you should know


Author: Travis Luke

Last week my esteemed colleague Dan Cuomo introduced Leap Seconds support for Windows 10 including what you need to know if you’re an IT Pro.

If you’re an application developer, the things you need to know are a little bit different. I’m sure all of you were wondering how your application can take advantage of the ‘60’ second. How can you accurately measure time and time durations during leap seconds? And how can frameworks and applications that calculate time stay in sync with the Operating System? So, in this article, I’ll attempt to explain all that and describe some of the details and considerations needed to support leap seconds in your application.

Before we get into the details of what developers should consider, let’s have a brief history of measuring time and the birth of the Leap Second.  As we all know (🙂), the Gregorian calendar has a set standard of measuring intervals for time.

  • 1000 milliseconds per second
  • 60 seconds per minute
  • 60 minutes in every hour
  • 24 hours in every day
  • Months have a variable (but repeatable) pattern for the number of days in a month, ranging from 28 to 31.  And years have 365 days.

The big exception to that is leap years.  (Almost) every 4 years a day is added to February to create a 366-day year.  What is convenient about all of this is it is very predictable.  We can say with certainty how time will be counted in the coming years, decades, and centuries, down to the millisecond.

However, leap seconds are not a predictable event.  An international committee called the IERS periodically decides to insert a leap second based on observations of the rotation of the earth.  Every six months there is an announcement about whether a leap second will or will not be added or subtracted. This extra second occurs on either June 30th or December 31st.  The timing of this event occurs at the same time all over the globe, at 23:59:59 UTC.

If a second is added, the official clock will move in 1000 ms increments from 23:59:59 UTC to 23:59:60 UTC to 00:00:00 UTC.  If a second is subtracted (which has never happened so far), time would move in 1000 ms increments from 23:59:58 UTC to 00:00:00 UTC.

When IERS publishes a leap second event, this data will arrive at all Windows PCs through a few mechanisms.  A PC may get this data when it is syncing its time with an NTP server; by default, Windows syncs with an NTP time source, such as time.windows.com, every day.  It may also receive this data through Windows Update. When this data arrives, it is stored in the Operating System.  This allows Windows to operate on the knowledge of those events.
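To see for yourself when a machine last synchronized and from which source, you can run:

w32tm /query /status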

Windows uses a structure called FILETIME to record a timestamp. (If you are curious like me, you may wonder why it is called FILETIME. This is because it was originally used only in the Windows file system to represent the timestamp of a file. This structure is now used throughout the operating system for all timestamp-related scenarios.) The FILETIME structure represents the number of 100-nanosecond intervals since Jan 1, 1601.  There are several APIs available to convert this value into a more readable form.  For example, FileTimeToSystemTime will convert the FILETIME into a SYSTEMTIME structure representing the UTC time of that value.  The SYSTEMTIME structure breaks the value down into year, month, day, hour, minute, and second fields.

Starting in Windows Server 2019 and the Windows 10 October 2018 Update, time APIs will now take into account all leap seconds the Operating System is aware of when translating FILETIME to SYSTEMTIME. No change is made to FILETIME. It still represents the number of 100 ns intervals since the start of the epoch. What has changed is the interpretation of that number when it is converted to SYSTEMTIME and back. Here is a list of affected APIs:

  • GetSystemTime
  • GetLocalTime
  • FileTimeToSystemTime
  • FileTimeToLocalFileTime
  • SystemTimeToFileTime
  • SetSystemTime
  • SetLocalTime

Prior to this release, SYSTEMTIME had valid values for wSecond between 0 and 59.  SYSTEMTIME has now been updated to allow a value of 60, provided the year, month, and day represent a day on which a leap second is valid.

 

Here are number of Frequently Asked Questions about developing Leap Second Aware Applications:

How can applications take advantage of the ‘60’ second? 

In order to receive the 60 second in the SYSTEMTIME structure, a process must explicitly opt in.  You can have your process do this by calling SetProcessInformation with the ProcessLeapSecondInfo option and the PROCESS_LEAP_SECOND_INFO_FLAG_ENABLE_SIXTY_SECOND flag set:

DWORD ErrorCode;
BOOL Success;
PROCESS_LEAP_SECOND_INFO LeapSecondInfo;
ZeroMemory(&LeapSecondInfo, sizeof(LeapSecondInfo));

// Opt in: allow SYSTEMTIME.wSecond to report 60 during a leap second.
LeapSecondInfo.Flags = PROCESS_LEAP_SECOND_INFO_FLAG_ENABLE_SIXTY_SECOND;

Success = SetProcessInformation(GetCurrentProcess(),
                                ProcessLeapSecondInfo,
                                &LeapSecondInfo,
                                sizeof(LeapSecondInfo));
if (!Success) {
    ErrorCode = GetLastError();
    fprintf(stderr, "Set Leap Second priority failed: %lu\n", ErrorCode);
    goto cleanup;
}

By calling this, you are telling the operating system that your application will accept a SYSTEMTIME structure with second values between 0 and 60.  Applications are expected to handle the 60 value in a way that makes sense.  For example, if your application is showing transactions in a list with a timestamp, it will display the timestamp with the 23:59:60 value.  Or if your application is an analog clock, it may play a special animation to indicate a leap second is occurring.

Application developers are encouraged to test their applications with the process opted in.  We have provided a simple method to get the opted-in behavior without recompiling your code. Please check our previous blog entry on Leap Seconds for the IT Pro, which has a section named “Testing Applications” that provides a method to opt in through the registry.

You can also see examples of how to insert leap seconds yourself for testing purposes using w32tm.exe.

 

How can application developers ensure their application is Leap Second compatible? 

There is a valid concern that if the SYSTEMTIME structure displayed a seconds value of 60, it would break applications that are not leap second aware.  Imagine your application is an analog clock.  It may be assuming that valid values of the seconds field are between 0 and 59.  If the value is 60, the application may crash as it attempts to calculate the angle at which to draw the second hand.

To address this, by default all processes are in a “compatibility mode” unless they explicitly opt in to receive the ‘60’ second.  In compatibility mode the second value is guaranteed to be between 0 and 59.  In the second before a leap second is added, or at the ‘59’ second, the clock will slow down to half speed for two seconds.  This has the visual effect of the 59 second being twice as long.  During this time the millisecond values will also be slowed down by ½.  When these 2000 milliseconds are complete, the clock will resume incrementing at normal speed.  This has the effect of giving applications the leap second while allowing all timestamps that occur during the “slowdown” period to be sorted in the correct order they occurred. To reiterate, the above is the default behavior while in compatibility mode.

 

How can you accurately measure time and time durations during leap seconds?

One question that frequently comes up is how you measure a time duration.  Say you want to add 1 day to an existing timestamp.  Does that mean adding 24 hours, or 1,440 minutes, or 86,400 seconds to a given timestamp?  Or does it mean adding 1 day, regardless of how many seconds that day has (including possible leap seconds)?  If what you want is to add 86,400 seconds, you can follow the guidance here.  In this case you are taking the FILETIME structure and moving it forward a specific number of milliseconds to achieve one normal day’s worth of time.

On the other hand, if you want to increment one day regardless of the seconds in that day, there is another approach you must use. In this case you convert your FILETIME to a SYSTEMTIME structure using FileTimeToSystemTime.  Then add the number of days to the structure.  Then convert it back to a FILETIME using the SystemTimeToFileTime API.  This allows the operating system to apply the arithmetic to convert the SYSTEMTIME to a FILETIME while factoring in any known leap seconds.

Care must be taken when FILETIME values are passed between computers.  If a FILETIME value is generated on an older Windows PC or on a non-Windows PC, it may not take into account leap seconds.  If that FILETIME is then converted to a SYSTEMTIME structure on a PC that does take into account leap seconds, the intended time may be off.  To correct for this, a registry key has been provided which disables all leap second logic.  If you set this registry key, all behavior involving SYSTEMTIME and leap seconds is reverted.  If you are passing FILETIME values in a heterogeneous environment, you may consider setting this key. You can find more details here under the subject “Revert to Prior OS Behavior”.

 

How can frameworks and applications that calculate time stay in sync with the Operating System? 

Some frameworks and applications may attempt to calculate time using their own arithmetic.  For example, the .NET Framework has logic in the System.DateTime structure to represent time.  If the calculation of time is not handled by the operating system, the framework may arrive at a different time than the Operating System.  For example, imagine you called DateTime.Now one month after a leap second occurred.  The framework would call GetSystemTimeAsFileTime to get the FILETIME of the current moment.  It would then store this value inside the structure.  When a user wants to know the date, they may call the .ToString() function.  If the framework attempted to perform its own arithmetic to turn that time into year, month, day, hour, minute, and second values, and didn’t take into account the leap second, then the time it returned would be one second faster than the time the operating system reported. For each leap second added to the system, the framework would continue to drift forward in time.  To correct this, the .NET Framework updated the implementation to call the FileTimeToSystemTime API.  This allows the operating system, rather than the framework, to account for all leap seconds and perform the proper arithmetic.

Applications that rely on 3rd party frameworks should ensure their framework’s implementation on Windows also calls into the correct APIs to calculate time, or else the application will report the wrong time.

 

Does the .NET framework support Leap Seconds?

At the time of this writing, the System.DateTime structure does not account for leap seconds. It effectively runs in the compatibility mode described in the section above. In other words, during the moment of a leap second, the ‘59’ second will be twice as long. Stay tuned for updates as greater leap second support is added to the .NET Framework.

 

How can I prevent Leap Seconds from occurring?

We have had a lot of discussion about this. We are thinking of organizing a day when everybody who is against leap seconds runs west. This will hopefully have the effect of changing the rotation of the earth and eliminating the need for leap seconds.

 

We recommend that all developers make their applications leap second aware. We encourage you to try out the tools we provided to test your applications and choose the approaches that work for your scenarios. We are eager to hear about the development community’s experiences with leap seconds.

Thanks for reading,

Travis Luke

Managing risky 3rd party app permissions with Microsoft’s CASB


While the focus on cloud-based services continues to drive modern IT, the cloud is also making it increasingly easy for users to source new cloud applications without IT oversight in their quest for productivity. In most cases this leads to an increase in cloud-based Shadow IT across Software as a Service (SaaS) solutions, Infrastructure as a Service (IaaS), as well as connected 3rd party applications, and exposes organizations to new threats.

 

In this post we will discuss 3rd party app permissions as a specific form of Shadow IT and the threat vector that is created when these are authorized against sanctioned IT applications, using protocols such as Open Authorization (OAuth). Furthermore, we will review recent attacks and outline how Microsoft’s Cloud Access Security Broker (CASB) capabilities can help you gain insights into this specific form of Shadow IT and safely adopt OAuth apps in your environment - allowing you to balance security and user productivity.

 

Understanding OAuth

OAuth is a web-based industry standard protocol that enables users to grant web applications access to their accounts and data without sharing their credentials. It was originally created for consumer-focused services such as Facebook or Twitter. More recently, enterprise adoption of OAuth has increased alongside the continued adoption of cloud-based solutions in corporate environments, because it simplifies login processes across the numerous cloud applications in use.

 

Once a user authorizes an app, an access token is created that provides the application with programmatic access to the user’s corporate data.  The application can take advantage of the assigned permissions until the token is manually revoked.  Contrary to common perception, changing the user’s password or introducing a second factor of authentication afterwards has no effect on the app’s access token.

 

Based on data from Microsoft Cloud App Security, we’re seeing a continued increase in the number of authorized 3rd party apps. While on average organizations, regardless of size, have 81 authorized OAuth apps in their environment, some organizations already have more than 250 apps.

 

OAuth apps as a threat vector

While extremely convenient, OAuth introduces a new threat vector to the security of organizations and enables potential back doors into corporate environments when malicious apps are authorized.  OAuth phishing is a more recent phishing technique, in which attackers trick users into granting access to rogue applications.  Stats show that 4% of people will click on any given phishing campaign[1], with the cost averaging $1.6 million when an organization is affected by a phishing campaign.[2]

 


OAuth phishing specifically exploits users’ inability to differentiate legitimate from rogue cloud applications. One of the most prominent attacks, carried out by the hacker group Fancy Bear in 2017, was designed to impersonate the Gmail interface and thereby steal users’ access tokens and gain access to their accounts.

 

 

Image 1: Impersonation attack interface by Fancy Bear in 2017

In this scenario, attackers rebuild web pages to look nearly identical to the genuine web pages users believe they are accessing. Unless users closely inspect the web address, they may not realize that they are instead granting permissions to a rogue web application.

 

Unfortunately, users often click “accept” without closely reviewing the details of the permissions they are granting to individual apps - and the more privileged the user, the higher the risk of exposure. This problem is compounded by the fact that IT may have little or no insight into the apps that have been authorized, or may lack the tools to evaluate the security risk of an application against the productivity benefit it provides.

 

Safely adopting and managing OAuth apps with Microsoft’s CASB

Microsoft Cloud App Security (MCAS) provides a comprehensive solution with reporting and analytics on the use of Shadow IT, as well as deep investigation and remediation capabilities to limit the risk and exposure for organizations.

 

To address the risk of 3rd party app permissions, MCAS enables IT to gain an overview of the applications authorized against their cloud services Office 365, Salesforce, and G Suite. It allows them to continuously monitor new app permissions and provides controls to prevent and remediate malicious OAuth apps from gaining access to corporate data.

 

Managing app permissions

Microsoft Cloud App Security app permissions enable you to see which OAuth applications have access to Office 365, G Suite, and Salesforce data, view the full list of permissions that were granted to each app, and see which users granted these apps access.

 

Image 2: App permission overview dashboard in Microsoft Cloud App Security

To better understand unknown applications, admins can drill down into the details of each app and analyze its permission level, its community use (which indicates how common the app is in other organizations), and the related user activities that were logged by MCAS.

 

Revoking risky apps and notifying users

Community use and the permission level details help admins decide which apps users are allowed to continue to access and which ones will be revoked.

Once reviewed, admins can easily mark an app as approved in the organization, to indicate that it’s been reviewed and approved for organizational use, while apps considered risky can be marked as “banned”, which revokes the app’s permissions.

 

For continuous monitoring of the OAuth apps connected to your environment, you can create permission policies that notify admins when an OAuth app meets a set of pre-defined criteria. For example, admins can configure an alert for when a new app that requires a high permission level is authorized by a large set of users or by privileged user accounts.

To minimize the impact to your organization, these alerts can be configured with governance actions to automatically revoke the permissions of an app that is considered risky.

  

Image 3: Alert - Detection of a new risky OAuth app

 

OAuth apps are becoming increasingly popular among end users in corporate environments, and among attackers as well. That’s why it’s crucial for organizations to continually monitor authorized apps and identify risky apps quickly to limit the impact to the organization.

 

More info and feedback

Learn how to start managing your app permissions with Microsoft Cloud App Security using our technical documentation.

Don’t have Microsoft Cloud App Security? Start a free trial today and take a look at our datasheet for an overview of our key use cases and integrations.

As always, we want to hear from you! If you have suggestions, questions, or comments, please let us know on our Tech Community page.

 

[1] Verizon Data Breach Report, 2018

[2] Enterprise Phishing Resiliency and Defense Report 2017


Check out the latest Microsoft 365 Security solutions blog – Secure File Storage!


This blog explores how Microsoft 365 has simplified and secured the process of sharing files so that employees can easily gather data, expert opinions, edits, and responses from only the right people in a single document. Read the blog: “Secure File Storage.”

 


Infrastructure + Security: Noteworthy News (October, 2018)


Hi there! Stanislav Belov here, bringing you the next issue of the Infrastructure + Security: Noteworthy News series!  

As a reminder, the Noteworthy News series covers various areas, including interesting news, announcements, links, tips, and tricks from the Windows, Azure, and Security worlds on a monthly basis.

Microsoft Azure
Announcing New Module ‘Az’
In August 2018 we released a new module, ‘Az’, which combines the functionality of the AzureRM and AzureRM.Netcore modules. Az runs on both PowerShell 5.1 and PowerShell Core. ‘Az’ ensures that the PowerShell and PowerShell Core cmdlets for managing Azure resources will always be in sync and up to date. In addition, Az will simplify and regularize the naming of Azure cmdlets and the organization of Azure modules. Az is intended as a replacement for the AzureRM.Netcore and AzureRM modules. AzureRM will continue to be supported, and important bugs will be fixed, but new development and new Azure capabilities will ship only in Az starting December 2018.
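Getting started is straightforward; a quick sketch, assuming a recent version of PowerShellGet:

# Install the new Az module from the PowerShell Gallery, then sign in
Install-Module -Name Az -Scope CurrentUser
Connect-AzAccount
Get-AzSubscription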
Serial console for Azure VMs now generally available
For those new to serial console, you’ll likely recognize this scenario: You’ve made a change to your VM that results in you being unable to connect to your VM through SSH or RDP. In the past, this would have left you pretty helpless. Serial console enables you to interact with your VM directly through the VM’s serial port – in other words, it is independent of the current network state, or as I like to say, it’s “like plugging a keyboard into your VM.” This means that you can debug an otherwise unreachable VM to fix issues like a broken fstab or a misconfigured network interface, without needing to resort to deleting and recreating your VM.
Staying up to date with the Microsoft Azure roadmap (Ignite video)
Cloud services like Azure are evolving faster and unlike any other technology we use today. However, as a technologist, responsible for helping your organization keep up with this pace of change and make sense of it all, it is easy to be overwhelmed. In this session, the Azure Service Operations team shares how we track, manage, and communicate change – so you can stay ahead of new capabilities, changes, and deprecations in Azure.
Managing your IaaS resources in the Microsoft Azure Portal: What’s new in 2018
Azure changes fast, and it can be hard to keep up with the latest updates. Meet the Azure Portal IaaS Experiences team as we share our favorite updates to the Azure Portal for IaaS (Compute, Networking, Storage) resources, and provide your feedback on our ideas for the future.
Azure Active Directory: New features and roadmap (Ignite video)
Get an overview of Azure Active Directory capabilities, demos, and what’s new or coming soon! Hear about the newest features and experiences across identity protection, conditional access, single sign-on, hybrid identity environments, managing partner and customer access, and more.
Announcing password-less login, identity governance, and more for Azure Active Directory
Microsoft is ending the era of passwords! This week we announced that password-less phone sign in to Azure AD accounts via Microsoft Authenticator is now available in public preview. With this capability, your employees with Azure AD accounts can use the Microsoft Authenticator app to replace passwords with a secure multi-factor authentication option that is both convenient and reduces risk.
How Microsoft manages a hybrid infrastructure with Azure (Ignite video)
With over 95% of the Microsoft enterprise IT infrastructure in the cloud, the company is adopting Microsoft Azure monitoring, patching, backup, and security tools to create a customer-focused self-service management environment focused on DevOps and modern engineering principles. Learn from Microsoft Core Services Engineering and Operations (CSEO)—the experts who run the critical products and services that power Microsoft—how it is benefiting from the growing feature set of Azure management tools and is set to deliver a fully automated, self-service management solution that gives the experts visibility over the company’s entire IT environment. The result? Business groups at Microsoft will be able to adapt IT services to best fit their needs.
Windows Server
What’s new in Active Directory Federation Services (AD FS) in Windows Server 2019 (Ignite video)

Active Directory Federation Services (AD FS) continues to be the #1 federation provider to login to Office 365 and has grown to power logins for over 77M users globally! AD FS is also actively used to build modern applications to power the next generation of line-of-business applications that cater to the digital transformation for modern workplaces. Learn about the exciting new and upcoming capabilities in Windows Server 2019 to securely and seamlessly sign-in users from anywhere on a variety of devices. We primarily focus on securing extranet access and enabling logins without passwords, and discuss additional security features to protect password-based logins for extranet access. We focus on new capabilities introduced to support modern applications built using OpenID Connect and OAuth. We also discuss advances made to enable smooth sign-in experiences for end users.

Windows Server 2019: What’s new and what’s next (Ignite video)

Windows Server is a key component in Microsoft’s hybrid and on-premises strategy and in this session, hear what’s new in Windows Server 2019. Join us as we discuss the product roadmap, Semi-Annual Channel, and demo some exciting new features.

Windows Server 2019 deep dive (Ignite video)

Hybrid at its core. Secure by design. With cloud application innovation and hyper-converged infrastructure built into the platform, backed by the world’s most trusted cloud, Azure, Microsoft presents Windows Server 2019.

Windows Server Upgrade Center

Do you need guidance or advice on how to upgrade from one OS to another? What considerations need to be taken into account before and after upgrading? When you upgrade a Windows Server in-place, you move from an existing operating system release to a more recent release while staying on the same hardware. Windows Server can be upgraded in-place at least one, and sometimes two, versions forward. For example, Windows Server 2012 R2 and Windows Server 2016 can be upgraded in-place to Windows Server 2019.

What’s new in Remote Desktop Services on Windows Server 2019 (Ignite video)

Remote Desktop Services evolved along with Windows Server to become one of the main platforms for providing users centralized access to the applications they need. In this session, learn about the enhancements in Windows Server 2019 and how these combined with the power of Azure to fit your virtualization needs.

Other RDS related Ignite sessions:

New multi-session virtualization capabilities in Windows

Migrate your virtualized client application to Microsoft Azure

Windows Virtual Desktop overview

Windows Client
The value of the Microsoft Managed Desktop

Looking for an in-depth understanding of the new Microsoft Managed Desktop offering? This is the session for you. For the first time, you have a choice to either manage your modern desktops yourself or choose the Microsoft Managed Desktop as the easiest way to delight users and free up IT – providing the best experience for users with the latest technology that is backed by Microsoft.

Deploying Windows 10 in the enterprise using traditional and modern techniques (Ignite video)

With Windows 10, we introduced the concept of Windows as a service to allow companies to remain current with the rapid release of features every six months. The key to embracing this servicing model is to move from a project-based approach to a process-based approach. Learn how to leverage both traditional and modern deployment techniques and tools ranging from System Center Configuration Manager, Microsoft Intune, Windows Update for Business, and Windows Autopilot as part of a hybrid approach to effectively deliver the bits. Learn the how and why behind Windows as a service, but, more importantly, learn which scenarios work best in which situations so that you can optimize your deployment while minimizing user impact.

Ask the experts: Successfully deploying, servicing, and managing Windows 10 (Ignite video)

In this Q&A session, we’ll address your questions and some of the common challenges (perceived or otherwise) across Windows 10 deployment planning, from phased rollouts to update management and device management. Cadence too fast? Deployment too challenging? What happened to Semi-Annual Channel (Targeted)? Let’s tackle these questions and other issues seen in real-world deployment situations.

Microsoft 365 adds modern desktop on Azure

Windows Virtual Desktop is the best virtualized Windows and Office experience delivered on Azure. Windows Virtual Desktop is the only cloud-based service that delivers a multi-user Windows 10 experience, optimized for Office 365 ProPlus, and includes free Windows 7 Extended Security Updates. With Windows Virtual Desktop, you can deploy and scale Windows and Office on Azure in minutes, with built-in security and compliance.

Security
Strengthen your security posture and protect against threats with Azure Security Center

Security Center is built into the Azure platform, making it easy for you to start protecting your workloads at scale in just a few steps. Our agent-based approach allows Security Center to continuously monitor and assess your security state across Azure, other clouds and on-premises. It’s helped many customers strengthen and simplify their security monitoring. Security Center gives you instant insight into issues and the flexibility to solve these challenges with integrated first-party or third-party solutions. In just a few clicks, you can have peace of mind knowing Security Center is enabled to help you reduce the complexity involved in security management. On September 26, at the Ignite Conference, we announced several new capabilities that will help you strengthen your security posture and protect against threats across hybrid environments.

Microsoft Cloud App Security and Windows Defender ATP – better together
Microsoft Cloud App Security now uniquely integrates with Windows Defender Advanced Threat Protection (ATP) to enhance the discovery of Shadow IT in your organization and extend it beyond your corporate network. Our CASB can now leverage the traffic information collected by Windows Defender ATP, no matter which network users are accessing cloud apps from. This seamless integration does not require any additional deployment and gives admins a more complete view of cloud app and service usage in their organization.
How Azure Advanced Threat Protection detects the DCShadow attack
A domain controller shadow (DCShadow) attack is an attack designed to change directory objects using malicious replication. During this attack, DCShadow impersonates a replicating Domain Controller using administrative rights and starts a replication process, so that changes made on one Domain Controller are synchronized with other Domain Controllers. Given the necessary permissions, attackers attempt to initiate a malicious replication request, allowing them to change Active Directory objects on a genuine Domain Controller to gain persistence in the domain.
Start using Microsoft 365 to accelerate modern compliance
With more than 200 updates from 750 regulatory bodies a day, keeping up to date with all the changes is a tremendous challenge. As privacy regulations, like the General Data Protection Regulations (GDPR), continue to evolve, compliance requirements can seem complex to understand and meet. However, when you store your data in the Microsoft Cloud, achieving compliance becomes a shared responsibility between you and Microsoft. Take the National Institute of Standards and Technology (NIST) 800-53 security control framework as an example—Microsoft helps you take care of 79 percent of the 1,021 controls, and you can focus your efforts on the remaining 21 percent. Additionally, Microsoft provides you with a broad set of security and compliance solutions to more seamlessly implement your controls.
Security baseline for Windows 10 v1809 and Windows Server 2019
We are pleased to announce the draft release of the security configuration baseline settings for Windows 10 version 1809 (a.k.a., “Redstone 5” or “RS5”), and for Windows Server 2019. Please evaluate these proposed baselines and send us your feedback via blog comments below.
Out of sight but not invisible: Defeating fileless malware with behavior monitoring, AMSI, and next-gen AV
As antivirus solutions become better and better at pinpointing malicious files, the natural evolution of malware is to shift to attack chains that use as few files as possible. While fileless techniques used to be employed almost exclusively in sophisticated cyberattacks, they are now becoming widespread in common malware, too. At Microsoft, we actively monitor the security landscape to identify new threat trends and develop solutions that continuously enhance Windows security and mitigate classes of threats. We instrument durable generic detections that are effective against a wide range of threats. Through AMSI, behavior monitoring, memory scanning, and boot sector protection, we can inspect threats even with heavy obfuscation. Machine learning technologies in the cloud allow us to scale these protections against new and emerging threats.
Ensure all your users have strong passwords with Azure Active Directory Password Protection (Ignite video)
One weak password is all a hacker needs to get access to your organization’s resources and data. Come to this session to learn about Azure Active Directory password protection and how we bring cloud-powered protection to ensure strong passwords that are invulnerable to compromise.
A world without passwords (Ignite video)
Learn how the security experts in Microsoft’s Core Services Engineering & Operations team are working to eliminate passwords. This advancement is both more secure and easier for people to use!
Attack discovery and investigation with Azure Advanced Threat Protection (Ignite video)
Azure Advanced Threat Protection is a critical solution for the security operations analyst during and after an incident by providing a real-time attack timeline for forensic analysis and deep investigation into attack methodologies. Join us as we walk you through an attack kill chain and demonstrate the role Azure Advanced Threat Protection plays as part of Microsoft 365 Security.
Become the hunter: Advanced hunting in Windows Defender ATP (Ignite video)
Windows Defender Advanced Threat Protection gives incident responders insights into endpoint activity they’ve always wished they had when incidents occur. In this theater session, learn how to use advanced hunting to gain insights into endpoint data going far beyond just responding to alerts.
Discover what’s new and what’s coming in Office 365 Message Encryption and Azure Information Protection (Ignite video)
Learn about the brand new features and capabilities in Microsoft Azure Information Protection and Office 365 Message Encryption. These solutions help protect your most sensitive and important data, and we continuously invest in providing the most comprehensive set of capabilities.
Vulnerabilities and Updates
Updated version of Windows 10 October 2018 Update released to Windows Insiders

In early October we paused the rollout of the Windows 10 October 2018 Update (version 1809) for all users as we investigated isolated reports of users missing files after updating. Given the serious nature of any data loss, we took the added precaution of pulling all 1809 media across all channels, including Windows Server 2019 and IoT equivalents. We intentionally start each feature update rollout slowly, closely monitoring feedback before offering the update more broadly. In this case the update was only available to those who manually clicked on “check for updates” in Windows settings. We were just two days into the rollout when we paused, so the number of customers taking the October 2018 Update was limited. While the reports of actual data loss are few (one one-hundredth of one percent of version 1809 installs), any data loss is serious.

Support Lifecycle
Get ready for Windows Server 2008 and 2008 R2 end of support (Ignite video)

Windows Server 2008 and 2008 R2 were great operating systems at the time, but times have changed. Cyberattacks are commonplace, and you don’t want to get caught running unsupported software. End of support for Windows Server 2008 and 2008 R2 means no more security updates starting on January 14, 2020. Join us for a demo-intensive session to learn about your options for upgrading to the latest OS. Or consider migrating 2008 to Microsoft Azure where you can get three more years of extended security updates at no additional charge.

Extended Security Updates for SQL Server and Windows Server 2008/2008 R2: Frequently Asked Questions (PDF)

On January 14, 2020, support for Windows Server 2008 and 2008 R2 will end. That means the end of regular security updates. Don’t let your infrastructure and applications go unprotected. We’re here to help you migrate to current versions for greater security, performance and innovation.

Microsoft Premier Support News
To support cloud platform growth, migrations to Azure IaaS, and evolving hybrid cloud scenarios, Microsoft Services has developed an Onboarding Accelerator – Azure Infrastructure offering. This offering provides customers a scalable framework that uses Azure best practices as a baseline, so that customers can build their cloud-based infrastructure with confidence that they have planned correctly. Azure Architecture Planning sessions with Microsoft Azure field engineers establish the current and desired states and the key infrastructure components needed to run production workloads in Azure. Customers plan their future state together with Microsoft Azure field engineers; this helps the field engineers understand the customer’s needs and priorities, and helps the customer understand the required steps. Microsoft Azure field engineers then create documentation outlining the process of migrating to the desired state with Microsoft proven practices.
All it takes is one weak password for a hacker to get access to your corporate resources. Hackers can often guess passwords because regular users are pretty predictable. Often users create easy to remember passwords, and they reuse the same passwords or closely related ones over and over again. Hackers use brute force techniques like password spray attacks to discover and compromise accounts with common passwords. We are pleased to announce the release of the “POP – Azure Active Directory: Password Protection” that helps you eliminate easily guessed passwords from your environment, which can dramatically lower the risk of being compromised by a password spray attack. This service applies to both Azure Active Directory and Active Directory Domain Services (AD DS).
We are pleased to announce the release of WorkshopPLUS – Microsoft Identity Manager: Introduction and Technical Overview. Microsoft Identity Manager (MIM) 2016 builds upon the identity management and user self-service capabilities introduced in Forefront Identity Manager (FIM) 2010/R2 while supporting the latest Microsoft software releases. This 3-day WorkshopPLUS introduces and explains the features and capabilities of MIM 2016. It also provides an overview of the solution scenarios that MIM addresses including user, group, and password management.
Check out Microsoft Services public blog for new Proactive Services as well as new features and capabilities of the Services Hub, On-demand Assessments, and On-demand Learning platforms.

DSC Resource Kit Release October 2018


We just released the DSC Resource Kit!

This release includes updates to 9 DSC resource modules. In the past 6 weeks, 126 pull requests have been merged and 79 issues have been closed, all thanks to our amazing community!

The modules updated in this release are:

  • ComputerManagementDsc
  • SharePointDsc
  • StorageDsc
  • SqlServerDsc
  • xActiveDirectory
  • xExchange
  • xFailOverCluster
  • xHyper-V
  • xWebAdministration

For a detailed list of the resource modules and fixes in this release, see the Included in this Release section below.

xPSDesiredStateConfiguration is also in the pipeline for a release, but the xArchive resource is failing its tests, so that module is currently on hold and will be released when all the tests are passing once again.

Our latest community call for the DSC Resource Kit was on October 10. A recording will be available on YouTube soon. Join us for the next call at 12PM (Pacific time) on November 21 to ask questions and give feedback about your experience with the DSC Resource Kit.

The next DSC Resource Kit release will be on Wednesday, November 28.

We strongly encourage you to update to the newest version of all modules using the PowerShell Gallery, and don’t forget to give us your feedback in the comments below, on GitHub, or on Twitter (@PowerShell_Team)!

Please see our documentation here for information on the support of these resource modules.

Included in this Release

You can see a detailed summary of all changes included in this release in the table below. For past release notes, go to the README.md or CHANGELOG.md file on the GitHub repository page for a specific module (see the How to Find DSC Resource Modules on GitHub section below for details on finding the GitHub page for a specific module).

Module Name Version Release Notes
ComputerManagementDsc 6.0.0.0
  • ScheduledTask:
    • Added support for Group Managed Service Accounts, implemented using the ExecuteAsGMSA parameter. Fixes Issue 111
    • Added support to set the Synchronize Across Time Zone option. Fixes Issue 109
  • Added .VSCode settings for applying DSC PSSA rules – fixes Issue 189.
  • BREAKING CHANGE: PowerPlan:
    • Added IsActive Read-Only Property – Fixes Issue 171.
    • InActive power plans are no longer returned with their Name set to null. Now, the name is always returned and the Read-Only property of IsActive is set accordingly.
SharePointDsc 2.6.0.0
  • SPFarm
    • Fixed issue where Central Admin service was not starting for non-english farms
  • SPManagedMetadataServiceApp
    • Added additional content type settings (ContentTypePushdownEnabled & ContentTypeSyndicationEnabled).
    • Fixed issue where Get method would throw an error when the proxy did not exist.
    • Fixed an issue where the resource checks if the proxy exists and if not, it is created.
  • SPSearchContentSource
    • Fixed issue with numerical Content Sources name
    • Fixed issue where the code throws an error when the content source cannot be successfully created
  • SPSearchManagedProperty
    • Added a new resource to support Search Managed Properties
    • Fix for multiple aliases
  • SPSearchResultSource
    • Added a new ScopeUrl parameter to allow for local source creation
  • SPSearchTopology
    • Updated Readme.md to remove some incorrect information
    • Fixed logic to handle the FirstPartitionDirectory in Get-TargetResource
  • SPSelfServiceSiteCreation
    • New resource to manage self-service site creation
  • SPServiceAppSecurity
    • Added local farm token.
    • Fixed issues that prevented the resource to work as expected in many situations.
  • SPSite
    • Added the possibility for creating the default site groups
    • Added the possibility to set AdministrationSiteType
    • Fixed test method that in some cases always would return false
    • Fixed a typo in the values to check for AdministrationSiteType
    • Fixed an access denied issue when creating default site groups when the run as account does not have proper permissions for the site
  • SPTrustedIdentityTokenIssuer
    • Added parameter UseWReplyParameter
  • SPUserProfileServiceApp
    • Fixed issue which was introduced in v2.5 where the service application proxy was not created.
    • Updated resource to grant the InstallAccount permissions to a newly created service application to prevent issues in the Get method.
  • SPUserProfileSyncConnection
    • Fixed issue where empty IncludedOUs and ExcludedOUs would throw an error
  • SPWebAppClientCallableSettings
    • New resource to manage web application client callable settings including proxy libraries.
  • SPWebAppPropertyBag
    • New resource to manage web application property bag
  • SPWebAppSuiteBar
    • Fixed incorrect test method that resulted in this resource to never apply changes.
    • Enable usage of SuiteBarBrandingElementHtml for SharePoint 2016 (only supported if using a SharePoint 2013 masterpage)
SqlServerDsc 12.1.0.0
  • Changes to SqlServerDsc
    • Add support for validating the code with the DSC ResourceKit Script Analyzer rules, both in Visual Studio Code and directly using Invoke-ScriptAnalyzer.
    • Opt-in for common test “Common Tests – Validate Markdown Links”.
    • Updated broken links in README.md and in ExamplesREADME.md
    • Opt-in for common test “Common Tests – Relative Path Length”.
    • Updated the Installation section in the README.md.
    • Updated the Contributing section in the README.md after Style Guideline and Best Practices guidelines has merged into one document.
    • To speed up testing in AppVeyor, unit tests are now run in two containers.
    • Adding the PowerShell script Assert-TestEnvironment.ps1 which must be run prior to running any unit tests locally with Invoke-Pester. Read more in the specific contributing guidelines, under the section Unit Tests.
  • Changes to SqlServerDscHelper
    • Fix style guideline lint errors.
    • Changes to Connect-SQL
      • Adding verbose message in Connect-SQL so it now shows the username that is connecting.
    • Changes to Import-SQLPS
      • Fixed so that when importing SQLPS it imports using the path (and not the .psd1 file).
      • Fixed so that the verbose message correctly shows the name, version and path when importing the module SQLPS (it did show correctly for the SqlServer module).
  • Changes to SqlAg, SqlAGDatabase, and SqlAGReplica examples
    • Included configuration for SqlAlwaysOnService to enable HADR on each node to avoid confusion (issue 1182).
  • Changes to SqlServerDatabaseMail
    • Minor update to Ensure parameter description in the README.md.
  • Changes to Write-ModuleStubFile.ps1
    • Create aliases for cmdlets in the stubbed module which have aliases (issue 1224). Dan Reist (@randomnote1)
    • Use a string builder to build the function stubs.
    • Fixed formatting issues for the function to work with modules other than SqlServer.
  • New DSC resource SqlServerSecureConnection
    • New resource to configure a SQL Server instance for encrypted SQL connections.
  • Changes to SqlAlwaysOnService
    • Updated integration tests to use NetworkingDsc (issue 1129).
  • Changes to SqlServiceAccount
    • Fix unit tests that didn’t mock some of the calls. They no longer fail when a SQL Server installation is not present on the node running the unit test (issue 983).
StorageDsc 4.2.0.0
  • Disk:
    • Added PartitionStyle parameter – Fixes Issue 137.
    • Changed MOF name from MSFT_Disk to MSFTDSC_Disk to remove conflict with Windows built-in CIM class – Fixes Issue 167.
  • Opt-in to Common Tests:
    • Common Tests – Validate Example Files To Be Published
    • Common Tests – Validate Markdown Links
    • Common Tests – Relative Path Length
  • Added .VSCode settings for applying DSC PSSA rules – fixes Issue 168.
  • Disk:
    • Added “defragsvc” service conflict known issue to README.MD – fixes Issue 172.
  • Corrected style violations in StorageDsc.Common module – fixes Issue 153.
  • Corrected style violations in StorageDsc.ResourceHelper module.
xActiveDirectory 2.22.0.0
  • Add PasswordNeverResets parameter to xADUser to facilitate user lifecycle management
  • Update appveyor.yml to use the default template.
  • Added default template files .gitattributes, and .gitignore, and .vscode folder.
  • Added xADForestProperties: New resource to manage User and Principal Name Suffixes for a Forest.
xExchange 1.24.0.0
  • xExchangeHelper.psm1: Renamed common functions to use proper Verb-Noun format. Also addresses many common style issues in functions in the file, as well as in calls to these functions from other files.
  • MSFT_xExchTransportService: Removed functions that were duplicates of helper functions in xExchangeHelper.psm1.
  • Fixes an issue where only objects of type Mailbox can be specified as a Journal Recipient. Now MailUser and MailContact types can be used as well.
  • Update appveyor.yml to use the default template.
  • Added default template files .codecov.yml, .gitattributes, and .gitignore, and .vscode folder.
  • Add Unit Tests for xExchAntiMalwareScanning
  • Add remaining Unit Tests for xExchInstall, and for most common setup functions
  • Added ActionForUnknownFileAndMIMETypes,WSSAccessOnPublicComputersEnabled, WSSAccessOnPrivateComputersEnabled,UNCAccessOnPublicComputersEnabled UNCAccessOnPrivateComputersEnabled and GzipLevel to xExchOwaVirtualDirectory.
  • Added GzipLevel and AdminEnabled to xExchEcpVirtualDirectory.
  • Added OAuthAuthentication to xExchOabVirtualDirectory.
  • Updated readme with the new parameters and removed a bad parameter from xExchOwaVirtualDirectory that did not actually exist.
  • Updated .gitattributes to allow test .pfx files to be saved as binary
  • Added Cumulative Update / Exchange update support to xExchInstall resource.
  • Add remaining Unit Tests for all xExchangeHelper functions that don’t require loading the Exchange DLLs.
  • Renamed and moved file Examples/HelperScripts/ExchangeConfigHelper.psm1 to Modules/xExchangeCalculatorHelper.psm1. Renamed functions within the module to conform to proper function naming standards. Added remaining Unit tests for module.
xFailOverCluster 1.11.0.0
  • Changes to xFailOverCluster
    • Update appveyor.yml to use the default template.
    • Added default template files .codecov.yml, .gitattributes, and .gitignore, and .vscode folder.
    • Added FailoverClusters2012.stubs.psm1 from Windows Server 2012 and renamed existing test stub file to FailoverClusters2016.stubs.psm1.
    • Modified Pester Describe blocks to include which version of the FailoverClusters module is being tested.
    • Modified Pester tests to run against 2012 and 2016 stubs in sequence.
  • Changes to xCluster
    • Fixed cluster creation on Windows Server 2012 by checking if the New-Cluster command supports -Force before using it (issue 188).
  • Changes to xClusterQuorum
    • Changed some internal parameter names from the Windows Server 2016 version aliases which are compatible with Windows Server 2012.
  • Changes to xClusterNetwork
    • Fixed Set-TargetResource for Windows Server 2012 by removing the call to the Update method, as it doesn’t exist on this version and updates happen automatically.
xHyper-V 3.13.0.0
  • MSFT_xVMSwitch:
    • Changed “Id” parameter from read only to optional so the VMSwitch ID can be set on Windows Server 2016. This is important for SDN setups where the VMSwitch ID must remain the same when a Hyper-V host is re-installed.
    • Update appveyor.yml to use the default template.
    • Added default template files .codecov.yml, .gitattributes, and .gitignore, and .vscode folder.
xWebAdministration 2.3.0.0
  • Update appveyor.yml to use the default template.
  • Added default template file .gitattributes, and added default settings for Visual Studio Code.
  • Line endings were fixed in files that were committed with the wrong line endings.

How to Find Released DSC Resource Modules

To see a list of all released DSC Resource Kit modules, go to the PowerShell Gallery and display all modules tagged as DSCResourceKit. You can also enter a module’s name in the search box in the upper right corner of the PowerShell Gallery to find a specific module.

Of course, you can also always use PowerShellGet (available starting in WMF 5.0) to find modules with DSC Resources:

# To list all modules tagged as DSCResourceKit
Find-Module -Tag DSCResourceKit 
# To list all DSC resources from all sources 
Find-DscResource

Please note that only modules released by the PowerShell Team are currently considered part of the ‘DSC Resource Kit’, regardless of the presence of the ‘DSC Resource Kit’ tag in the PowerShell Gallery.

To find a specific module, go directly to its URL on the PowerShell Gallery:
http://www.powershellgallery.com/packages/< module name >
For example:
http://www.powershellgallery.com/packages/xWebAdministration

How to Install DSC Resource Modules From the PowerShell Gallery

We recommend that you use PowerShellGet to install DSC resource modules:

Install-Module -Name < module name >

For example:

Install-Module -Name xWebAdministration

To update all previously installed modules at once, open an elevated PowerShell prompt and use this command:

Update-Module

After installing modules, you can discover all DSC resources available to your local system with this command:

Get-DscResource

How to Find DSC Resource Modules on GitHub

All resource modules in the DSC Resource Kit are available open-source on GitHub.
You can see the most recent state of a resource module by visiting its GitHub page at:
https://github.com/PowerShell/< module name >
For example, for the CertificateDsc module, go to:
https://github.com/PowerShell/CertificateDsc.

All DSC modules are also listed as submodules of the DscResources repository in the DscResources folder and the xDscResources folder.

How to Contribute

You are more than welcome to contribute to the development of the DSC Resource Kit! There are several different ways you can help. You can create new DSC resources or modules, add test automation, improve documentation, fix existing issues, or open new ones.
See our contributing guide for more info on how to become a DSC Resource Kit contributor.

If you would like to help, please take a look at the list of open issues for the DscResources repository.
You can also check issues for specific resource modules by going to:
https://github.com/PowerShell/< module name >/issues
For example:
https://github.com/PowerShell/xPSDesiredStateConfiguration/issues

Your help in developing the DSC Resource Kit is invaluable to us!

Questions, comments?

If you’re looking into using PowerShell DSC, have questions or issues with a current resource, or would like a new resource, let us know in the comments below, on Twitter (@PowerShell_Team), or by creating an issue on GitHub.

Katie Keim
Software Engineer
PowerShell DSC Team
@katiedsc (Twitter)
@kwirkykat (GitHub)


SSH on Windows Server 2019


Hello all from PFE Land! I’m Allen Sudbring, a PFE in the Central Region. Today I’m going to talk about the built-in SSH server that can be added to Windows Server 2019. With previous versions of Windows Server, you needed to do some detailed configuration and installation to get SSH working. With Windows Server 2019, it has become much easier. Here are the steps to install, configure, and test:

  1. Open a PowerShell window on the server where you want to install the SSH server.

  2. Run the following command to install the SSH server components:

Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0

  3. The install opens the firewall port and configures the service. The last step is to set both SSH services to start automatically and start them with the following commands:

    Set-Service sshd -StartupType Automatic

    Set-Service ssh-agent -StartupType Automatic

    Start-Service sshd

    Start-Service ssh-agent
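
    Optionally, verify that the capability installed and that both services are running:

    Get-WindowsCapability -Online | Where-Object Name -like 'OpenSSH.Server*'

    Get-Service sshd, ssh-agent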

  4. Test with an SSH client. I used Ubuntu installed on Windows 10 WSL. For a domain-joined server, connect using the UPN of the login account followed by @servername, as in:

ssh allenadmin@sudbringlab.com@servername

For servers in a workgroup, use a local admin account @servername, as in:

ssh AzureVMAdmin@servername

  5. After you log in, you receive a command prompt where you can proceed with CMD or open PowerShell.

OpenSSH gives you the ability to connect to your Windows servers without remote PowerShell and get a full CMD and PowerShell experience. The ability to connect to Windows machines from Linux with a remote CMD shell is also useful in mixed environments.

In case you’re asking: yes, you can also go the opposite way and install PowerShell on Linux, then remote from PowerShell Core on a Windows machine to a PowerShell instance on the Linux machine, but that is for a later post…

 

Thanks for reading!

  

The new HCI industry record: 13.7 million IOPS with Windows Server 2019 and Intel® Optane™ DC persistent memory


Written by Cosmos Darwin, Senior PM on the Core OS team at Microsoft. Follow him on Twitter @cosmosdarwin.


 

 

Hyper-converged infrastructure is an important shift in datacenter technology. By moving away from proprietary storage arrays to an architecture built on industry-standard interconnects, x86 servers, and local drives, organizations can benefit from the latest cloud technology faster and more affordably than ever before.

Watch this demo from Microsoft Ignite 2018:

Intel® Optane™ DC persistent memory delivers breakthrough storage performance. To go with the fastest hardware, you need the fastest software. Hyper-V and Storage Spaces Direct in Windows Server 2019 are the foundational hypervisor and software-defined storage of the Microsoft Cloud. Purpose-built for efficiency and performance, they’re embedded in the Windows kernel and meticulously optimized. To learn more about hyper-converged infrastructure powered by Windows Server, visit Microsoft.com/HCI.

For details about this demo, including some additional results, read on!

Hardware


The reference configuration Intel and Microsoft used for this demo.

  • 12 x 2U Intel® S2600WFT server nodes
  • Intel® Turbo Boost ON, Intel® Hyper-Threading ON

Each server node:

  • 384 GiB (12 x 32 GiB) DDR4 2666 memory
  • 2 x 28-core future Intel® Xeon® Scalable processor
  • 5 TB Intel® Optane™ DC persistent memory as cache
  • 32 TB NVMe (4 x 8 TB Intel® DC P4510) as capacity
  • 2 x Mellanox ConnectX-4 25 Gbps


Intel® Optane™ DC modules are DDR4 pin compatible but provide native storage persistence.

Software

Windows OS. Every server node runs Windows Server 2019 Datacenter pre-release build 17763, the latest available on September 20, 2018. The power plan is set to High Performance, and all other settings are default, including applying relevant side-channel mitigations. (Specifically, mitigations for Spectre v1 and Meltdown are applied.)

Storage Spaces Direct. Best practice is to create one or two data volumes per server node, so we create 12 volumes with ReFS. Each volume is 8 TiB, for about 100 TiB of total usable storage. Each volume uses three-way mirror resiliency, with allocation delimited to three servers. All other settings, like columns and interleave, are default. To accurately measure IOPS to persistent storage only, the in-memory CSV read cache is disabled.
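
For reference, a volume of this shape can be created with the storage cmdlets. A minimal sketch, with a hypothetical volume name (three-way mirror is the default resiliency at this cluster size, and the delimited-allocation step is omitted):

# Create one 8 TiB CSVFS_ReFS volume (repeat once per server node)
New-Volume -StoragePoolFriendlyName S2D* -FriendlyName Volume01 -FileSystem CSVFS_ReFS -Size 8TB

# Disable the in-memory CSV read cache so only persistent storage is measured
(Get-Cluster).BlockCacheSize = 0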

Hyper-V VMs. Ordinarily we’d create one virtual processor per physical core. For example, with 2 sockets x 28 cores we’d assign up to 56 virtual processors per server node. In this case, saturating performance took 26 virtual machines x 4 virtual processors each = 104 virtual processors per server node. That’s 312 total Hyper-V Gen 2 VMs across the 12 server nodes. Each VM runs Windows and is assigned 4 GiB of memory.
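
VM Fleet creates the fleet automatically, but for illustration, one VM of this shape might be created by hand like so (a sketch; the VM name and path are hypothetical):

# Gen 2 VM with 4 GiB of memory and 4 virtual processors
New-VM -Name 'vm-001' -Generation 2 -MemoryStartupBytes 4GB -VHDPath 'C:\ClusterStorage\Volume01\vm-001.vhdx'
Set-VMProcessor -VMName 'vm-001' -Count 4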

VHDXs. Every VM is assigned one fixed 40 GiB VHDX where it reads and writes to one 10 GiB test file. For the best performance, every VM runs on the server node that owns the volume where its VHDX file is stored. The total active working set, accounting for three-way mirror resiliency, is 312 x 10 GiB x 3 = 9.36 TiB, which fits comfortably within the Intel® Optane™ DC persistent memory.

Benchmark

There are many ways to measure storage performance, depending on the application. For example, you can measure the rate of data transfer (GB/s) by simply copying files, although this isn’t the best methodology. For databases, you can measure transactions per second (T/s). In virtualization and hyper-converged infrastructure, it’s standard to count storage input/output (I/O) operations per second, or “IOPS” – essentially, the number of reads or writes that virtual machines can perform.

More precisely, we know that Hyper-V virtual machines typically perform random 4 kB block-aligned IO, so that’s our benchmark of choice.

How do you generate 4 kB random IOPS?

  • VM Fleet. We use the open-source VM Fleet tool available on GitHub. VM Fleet makes it easy to orchestrate running DISKSPD, the popular Windows micro-benchmark tool, in hundreds or thousands of Hyper-V virtual machines at once. To saturate performance, we specify 4 threads per file (-t4) with 16 outstanding IOs per thread (-o16). To skip the Windows cache manager, we specify unbuffered IO (-Su). And we specify random (-r) and 4 kB block-aligned (-b4k). We can vary the read/write mix with the -w parameter, which sets the percentage of writes (so -w0 means 100% reads).

In summary, here’s how DISKSPD is being invoked:

.\diskspd.exe -d120 -t4 -o16 -Su -r -b4k -w0 [...]
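
For the mixed read/write result below, the equivalent invocation would presumably swap -w0 for -w10:

.\diskspd.exe -d120 -t4 -o16 -Su -r -b4k -w10 [...]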

How do you count 4 kB random IOPS?

  • Windows Admin Center. Fortunately, Windows Admin Center makes it easy. The HCI Dashboard features an interactive chart plotting cluster-wide aggregate IOPS, as measured at the CSV filesystem layer in Windows. More detailed reporting is available in the command-line output of DISKSPD and VM Fleet.


The HCI Dashboard in Windows Admin Center has charts for IOPS and IO latency.

The other side to storage benchmarking is latency – how long an IO takes to complete. Many storage systems perform better under heavy queuing, which helps maximize parallelism and busy time at every layer of the stack. But there’s a tradeoff: queuing increases latency. For example, if you can do 100 IOPS with sub-millisecond latency, you may be able to achieve 200 IOPS if you accept higher latency. This is good to watch out for – sometimes the largest IOPS benchmark numbers are only possible with latency that would otherwise be unacceptable.

Cluster-wide aggregate IO latency, as measured at the same layer in Windows, is charted on the HCI Dashboard too.

Results

Any storage system that provides fault tolerance necessarily makes distributed copies of writes, which must traverse the network and incur backend write amplification. For this reason, the absolute largest IOPS benchmark numbers are typically achieved with reads only, especially if the storage system has common-sense optimizations to read from the local copy whenever possible, which Storage Spaces Direct does.

With 100% reads, the cluster delivers 13,798,674 IOPS.


Industry-leading HCI benchmark of over 13.7M IOPS, with Windows Server 2019 and Intel® Optane™ DC persistent memory.

If you watch the video closely, what’s even more jaw-dropping is the latency: even at over 13.7 M IOPS, the filesystem in Windows is reporting latency that’s consistently less than 40 µs! (That’s the symbol for microseconds, one-millionths of a second.) This is an order of magnitude faster than what typical all-flash vendors proudly advertise today.

But most applications don’t just read, so we also measured with mixed reads and writes:

With 90% reads and 10% writes, the cluster delivers 9,459,587 IOPS.

In certain scenarios, like data warehouses, throughput (in GB/s) matters more, so we measured that too:

With larger 2 MB block size and sequential IO, the cluster can read 535.86 GB/s!

Here are all the results, with the same 12-server HCI cluster:

Run | Parameters | Result
Maximize IOPS, all-read | 4 kB random, 100% read | 13,798,674 IOPS
Maximize IOPS, read/write | 4 kB random, 90% read, 10% write | 9,459,587 IOPS
Maximize throughput | 2 MB sequential, 100% read | 535.86 GB/s

Conclusion

Together, Storage Spaces Direct in Windows Server 2019 and Intel® Optane™ DC persistent memory deliver breakthrough performance. This industry-leading HCI benchmark of over 13.7M IOPS, with consistent and extremely low latency, is more than double our previous industry-leading benchmark of 6.7M IOPS. What’s more, this time we needed just 12 server nodes, 25% fewer than two years ago.


More than double our previous record, in just two years, with fewer server nodes.

It’s an exciting time for Storage Spaces Direct. Early next year, the first wave of Windows Server Software-Defined (WSSD) offers with Windows Server 2019 will launch, delivering the latest cloud-inspired innovation to your datacenter, including native support for persistent memory. Intel® Optane™ DC persistent memory comes out next year too – learn more at Intel.com/OptaneDCPersistentMemory.

We’re proud of these results, and we’re already working on what’s next. Hint: even bigger numbers!

Cosmos and the Storage Spaces Direct team at Microsoft,
and the Windows Operating System team at Intel


Microsoft Intune announces device-based subscription for shared resources

The meaning of “devices” has evolved in the modern workplace, with IT expected to support not only corporate PCs and bring-your-own (BYO) devices, but also manage kiosks, shared single-purpose devices, phone-room resources, collaboration devices such as Surface Hub, and even some IoT devices. Microsoft Intune is the most comprehensive unified endpoint management platform to manage and secure this proliferation of endpoints in your organization. We are excited to share a licensing update today that further lowers your total cost of ownership (TCO).
 
Microsoft Intune is pleased to announce a new device-based subscription service that helps organizations manage devices that are not affiliated with specific users. The Intune device SKU is licensed per device per month. 
 
It is worth noting that the device-based subscription does not allow you to take advantage of any user-based security and management features, including but not limited to email and calendaring, conditional access, and app protection policies. The device SKU also cannot be used for shared device scenarios where the device is managed through the user(s) on the device. Shared devices that are not affiliated with any user identity can leverage this license - for example, Android Enterprise purpose-built devices and kiosks, as well as Windows kiosks. This license may provide compelling value for devices using bulk enrollment methods such as Windows Autopilot, Apple Business Manager, or Google zero-touch enrollment, when the devices don’t require user affinity or user-targeted features such as user-based enrollment, the Intune Company Portal, or conditional access. 
 
 

Microsoft Ignite 2018 Clustering Sessions available


For those who attended Microsoft Ignite 2018 in Orlando, Florida, we thank you for making it another huge success.

So much fun was had by all.  We had the privilege of showing you what is new and coming in Windows Server 2019, with 700+ deep dive sessions and more than 100 workshops.

You got the latest insights and skills from technology leaders and practitioners shaping the future of cloud, data, business intelligence, teamwork, and productivity.  You also immersed yourselves in the latest tools, tech, and experiences that matter, and heard the latest updates and ideas directly from the experts.  There were demos galore throughout all the sessions.

Who can forget the demo of previously unheard-of performance numbers running Windows Server 2019 Storage Spaces Direct with Intel's Optane DC Persistent Memory?

Or the storage limit increase to 4 petabytes.  We are not just saying it because it's a big number; we showed it with help from our friends at Quanta Cloud Technology, Seagate, and Samsung.

In case you missed Ignite, attended but missed a session, or wish to view the sessions again, here are the links to all the sessions, available for your viewing pleasure both on the Microsoft Ignite pages and on YouTube.

To kick it all off, here is Satya Nadella's vision keynote from Microsoft Ignite 2018.

Vision Keynote
Ignite, YouTube
Satya Nadella – Chief Executive Officer of Microsoft

Since this is the Failover Clustering blog, I wanted to call out these sessions specifically to what we are doing in the hyper-converged infrastructure (HCI) space.

BRK2035 – Windows Server 2019: What’s new and what’s next
Ignite, YouTube
Erin Chapple, Vijay Kumar
Windows Server is a key component in Microsoft's hybrid and on-premises strategy. In this session, hear what's new in Windows Server 2019. Join us as we discuss the product roadmap and the Semi-Annual Channel, and see demos of some exciting new features.

BRK2241 – Windows Server 2019 deep dive
Ignite, YouTube
Jeff Woolsey
Hybrid at its core. Secure by design. With cloud application innovation and hyper-converged infrastructure built into the platform, backed by the world’s most trusted cloud, Azure, Microsoft presents Windows Server 2019. In this session Jeff Woolsey – Principal Program Manager – dives into the details of what makes Windows Server 2019 an exciting platform for IT pros and developers looking into modernizing their infrastructure and applications.

BRK2232 – Jumpstart your hyper-converged infrastructure deployment with Windows Server
Ignite, YouTube
Elden Christensen, Steven Ekren
The time is now to adopt hyper-converged infrastructure and Storage Spaces Direct. Where to start? This session covers design considerations and best practices, how to choose and procure the best hardware, sizing and planning, deployment, and how to validate your cluster is ready for showtime. Get tips and tricks directly from the experts! Applies to Windows Server 2016 and Windows Server 2019.

BRK2036 – From Hyper-V to hyper-converged infrastructure with Windows Admin Center
Ignite, YouTube
Cosmos Darwin, Daniel Lee
Discover how Windows Admin Center (Formerly Project “Honolulu”) makes it easier than ever to manage and monitor Hyper-V. It’s quick to deploy, there’s no additional license, and it’s built from years of feedback – this is YOUR new dashboard! Ready to go hyper-converged? New features like Storage Spaces Direct and Software-Defined Networking (SDN) are built right in, so you get an integrated, seamless experience ready for the future of the software-defined datacenter.

BRK2231 – Be an IT hero with Storage Spaces Direct in Windows Server 2019
Ignite, YouTube
Cosmos Darwin, Adi Agashe
The virtualization wave of datacenter modernization, consolidation, and savings made you an IT hero. Now, the next big wave is here: Hyper-Converged Infrastructure, powered by software-defined storage! Storage Spaces Direct is purpose-built software-defined storage for Hyper-V. Save money, accelerate IO performance, and simplify your infrastructure, from the datacenter to the edge. This packed technical session covers everything that’s new for Storage Spaces Direct in Windows Server 2019.

BRK2233 – Get ready for Windows Server 2008 and 2008 R2 end of support
Ignite, YouTube
Ned Pyle, Jeff Woolsey, Sue Hartford
Windows Server 2008 and 2008 R2 were great operating systems at the time, but times have changed. Cyberattacks are commonplace, and you don’t want to get caught running unsupported software. End of support for Windows Server 2008 and 2008 R2 means no more security updates starting on January 14, 2020. Join us for a demo-intensive session to learn about your options for upgrading to the latest OS. Or consider migrating 2008 to Microsoft Azure where you can get three more years of extended security updates at no additional charge.

We even had a few of our Microsoft MVPs jump in and deliver some theater sessions.

THR3127 – Cluster Sets in Windows Server 2019: What is it and why should I use it?
Ignite, YouTube
Carsten Rachfahl, Microsoft MVP
Would you like to have an Azure-like availability set and fault domain across multiple clusters in your private cloud? Do you need more than 16 nodes in a hyper-converged infrastructure cluster, or want multiple 4-node HCI clusters to behave like one? Then you definitely want to attend this session and learn about Cluster Sets – a new, amazing feature in Windows Server 2019 that solves these problems.

THR2233 – What is the Windows Server Software Defined (WSSD) program and why does it matter?
Ignite, YouTube
Carsten Rachfahl, Microsoft MVP
The Windows Server Software-Defined (WSSD) program allows vendors to build and offer tested end-to-end hyper-converged infrastructure solutions. After implementing more than 100 Storage Spaces Direct projects, Carsten thinks this is more important than ever. Why? In this session, learn the reasons, and get help choosing the right solution for you!

THR3137 – The case of the shrinking data: Data Deduplication in Windows Server 2019
Ignite, YouTube
Dave Kawula, Microsoft MVP
One of the most requested features for Storage Spaces Direct was ReFS with Data Deduplication. This feature was released over a year ago, but only in the Semi-Annual Channel, which did not include support for Storage Spaces Direct. The IT community has waited patiently, and the time has finally come: Windows Server 2019 adds full support for ReFS Data Deduplication to Storage Spaces Direct. What does this mean for you? How about more than 80% space savings on your VMs, backups, and ISO repositories, all running on Cluster Shared Volumes with Storage Spaces Direct. In this session, learn how to set up, configure, and test Data Deduplication with ReFS, based on Dave's years of knowledge working with Microsoft storage.

These sessions are just the tip of the iceberg given the number of sessions available to you. We hope you enjoy them and that you had as great a time at Ignite 2018 as we did.

I leave you now with two other huge announcements.

First, Ignite will be back in Orlando, Florida for Microsoft Ignite 2019. The dates are set for November 4-8, 2019 at the Orange County Convention Center. You can pre-register today!!

Second, Ignite 2018 is hitting the road and going global with “Microsoft Ignite | The Tour“. Join us at the place where developers and tech professionals continue learning alongside experts. Explore the latest developer tools and cloud technologies and learn how to put your skills to work in new areas. Connect with our community to gain practical insights and best practices on the future of cloud development, data, IT, and business intelligence. Join us for two days of community-building and hands-on learning.

We will be heading to places such as:

  • Toronto, Canada
  • Sydney, Australia
  • Berlin, Germany
  • Amsterdam, The Netherlands

And these are just a few of the places we are going. Head to the “Microsoft Ignite | The Tour“ page and find a city near you. Oh, and did I mention it is free!!!

Thanks,
John Marlin
Senior Program Manager
High Availability and Storage

Follow me on Twitter @JohnMarlin_MSFT

Unsticking Windows Updates That Are Stuck In Their Tracks


Hello everyone, Matt Novitsch (SCCM Premier Field Engineer) with Craig McCarty (Platforms Premier Field Engineer) here to talk to you about a method of unsticking stuck Windows Updates. We have seen this several times with customers and on our own machines: updates get stuck downloading, get stuck installing, or fail to install for a variety of reasons. We found one method that fixes them all, takes only a few steps, and can be done by any administrator…

So what do you need to do? Simple:

  1. Stop the BITS and the Windows Update Services
  2. Delete or rename the SoftwareDistribution folder
    1. NOTE: If deleting, it would be a good idea to copy or backup this folder first
  3. Start the BITS and Windows Update Services.
    1. NOTE: You should now see the SoftwareDistribution folder is recreated

This can be done via script by running the following in an administrative PowerShell console.

<#

Script Disclaimer. The sample scripts provided here are not supported under any Microsoft standard support program or service. All scripts are provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose.

#>

#Stop the BITS service

Stop-Service BITS

#Stop the Windows Update service

Stop-Service wuauserv

#Rename the SoftwareDistribution folder to .old; the folder will be recreated when the services are restarted

Rename-Item -Path "C:\Windows\SoftwareDistribution" -NewName "SoftwareDistribution.old"

#Start the BITS service

Start-Service BITS

#Start the Windows Update service

Start-Service wuauserv

 

Once this script is done, restart the endpoint and check for updates again (a quick way to script that check is sketched below).
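
If you want to verify from PowerShell that updates are flowing again after the restart, here is a minimal sketch using the Windows Update Agent COM API (the search criteria string shown is just one common example, not the only option):

#Search for applicable updates through the Windows Update Agent COM API
$session = New-Object -ComObject Microsoft.Update.Session
$searcher = $session.CreateUpdateSearcher()

#"IsInstalled=0 and IsHidden=0" returns updates that are applicable but not yet installed
$result = $searcher.Search("IsInstalled=0 and IsHidden=0")
Write-Host "Applicable updates found: $($result.Updates.Count)"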

If you are still experiencing problems with updates, you could have a corrupt or missing system file. To resolve this, run the Deployment Image Servicing and Management (DISM) tool followed by SFC /SCANNOW, using the steps below (a PowerShell alternative follows the steps).

  1. Open an elevated command prompt
  2. Type 'DISM.exe /Online /Cleanup-Image /RestoreHealth /Source:C:\GoodSource\Windows /LimitAccess' (C:\GoodSource should be replaced with a path to a Windows DVD or mounted ISO).
    *This command can take several minutes to run.
    *In cases where Windows Updates are not broken, you can simply run 'DISM.exe /Online /Cleanup-Image /RestoreHealth'.
    For example, an ISO mounted to D: can serve as the source.
  3. When the command completes, run 'SFC /SCANNOW' from the elevated command prompt.

  4. Wait for the verification to show 100% complete. If no errors were detected, you can close the window and try Windows Update again. If an error was detected and wasn't automatically repaired, please refer to this article, starting at Step 4.
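
If you prefer to stay in PowerShell for this step, the built-in DISM module offers an equivalent cmdlet. A minimal sketch, assuming C:\GoodSource\Windows is a valid repair source as described in step 2 above:

#PowerShell equivalent of DISM.exe /Online /Cleanup-Image /RestoreHealth
#C:\GoodSource\Windows is an assumption; point it at your own Windows DVD or mounted ISO
Repair-WindowsImage -Online -RestoreHealth -Source "C:\GoodSource\Windows" -LimitAccess

#Then verify and repair protected system files
sfc.exe /scannow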

If you haven’t discovered this method before, hopefully it helps you out of a jam. Thanks for reading!
