
Windows Transport converges on two Congestion Providers: Cubic and LEDBAT

#LEDBAT @Win10Transports

Why don’t we dive right  in?   What is a Congestion Provider and why do you (the reader) care?

  • What is it? A Congestion Provider is an algorithm that controls the flow of data from a Windows server to a client.
  • Why do you care? Because Cubic is for humans and LEDBAT is for unattended bots.

How does that impact me?  In the heart of the Windows kernel there is a networking stack. At the heart of the networking stack there is a layer called Transport, and Transport contains a suite of algorithms called Congestion Providers that control the data flow across the network/Internet.

Let’s take a look at the difference between the two. Cubic is optimized for throughput, while LEDBAT is optimized for low latency and non-interference. Now the picture is becoming clear: LEDBAT is for unattended bots (meaning that there is not a person actively waiting for the transaction to complete) because these bots should not interfere with human work, and Cubic should be used when there is a person waiting for the transaction to complete.

Let’s take a deeper look at the difference between the two using a specific example. Suppose we have a person doing their work with a web browser and a software update that is being delivered by SCCM. The difference here is clear: the software update should be using LEDBAT so that it does not interfere with the person, and the web browser should be using Cubic. With this arrangement the software update will do its work leveraging unused bandwidth, and when the person with the web browser wants to use the network the software update will relinquish the network resources. This allows the software updates to proceed without interfering with the person. Use your good judgement: applications that need to proceed without interfering with people who are working need LEDBAT; applications that people are using to do work need Cubic.


Figure 1 — Idealized Network Diagram

Let’s look at the idealized network diagram in the figure above. There is a TCP sender on the left and a TCP receiver on the right. The TCP sender sends packets into the network, which is modeled by a single queue. Upon receiving a packet, a network device immediately forwards the packet towards its destination. If the device cannot forward the packet immediately (because it is busy forwarding a previously received packet), it will place the packet in a queue. In the figure there are four packets in the queue. If the TCP sender sends another packet at this time, then it will have to wait for the four packets in the queue to be sent. This is called queuing delay, and it is what causes the laggy behavior that irritates people.


Figure 2 — Cubic drives the queue to saturation

Cubic tries to optimize throughput by sending packets faster and faster until one of them is dropped; then it slows down and repeats the behavior. Because the sender keeps increasing the sending rate, eventually the queue will be full. If a packet arrives when the queue is full, then that packet must be dropped. When a packet is dropped, Cubic (besides retransmitting) halves the sending rate (draining the queue) and repeats the process. The queue repeatedly fills and drains, which optimizes throughput.


Figure 3 — LEDBAT controls the queuing delay

LEDBAT (shown in the figure above) tries to optimize throughput just like Cubic by sending packets faster and faster. However, LEDBAT also keeps track of the queuing delay (lagginess). When the lagginess increases too much, LEDBAT slows down and drains the queue. This accomplishes two things: LEDBAT keeps the queuing delay lower, and since Cubic drives the queue past LEDBAT’s delay threshold, LEDBAT will always yield the network resources to Cubic. In other words, LEDBAT will use all of the network resources unless Cubic is using them.

This makes the perfect combination. Your background tasks such as software updates, backup, etc. can cruise along doing their work while the network is not in use and when a person hops on the network to do their work the LEDBAT tasks will get out of the way. Let’s use our person with their Edge web browser using Cubic and our software updates using LEDBAT as an example again to see how this works.

What makes this combination work so well is that humans and computers are opposites. The person using the web browser clicks on a few things and impatiently waits for the network to deliver them. This needs to be done as quickly as possible because even a few seconds can be painful and frustrating to a person. However, once the human has their content, they spend a great deal of time reading and looking at pictures. During this time, they are consuming their content and not really using the network at all.

This is what makes the combination awesome! Computers don’t get frustrated and they react very quickly. So, we have our software update proceeding nicely at full speed and then along comes a human who is in need of their information immediately, so they hit their Edge web browser clickity, clickity, clack, click, click, clack! The LEDBAT controller operating the software update download notices the increase in queuing delay (remember the figures?) and gets out of the way right away. The person gets their stuff immediately and happily begins consuming the information. While they are doing that the LEDBAT controller notices the unused network resources and downloads some more data. The person decides that they need more stuff and click away at their web browser and so on. The perfect team!

 

So, what are your action items here? If you are running a client, just get the latest Windows 10 update and you will have Cubic by default. If you have Windows Server 2019, same thing: Cubic is already the default Congestion Provider. If you are running Windows Server 2016, Cubic is not the default, but you can fix that by running Windows Update and this PowerShell:

This is what the default templates look like:

PS C:\Users\dahavey> Get-NetTCPSetting | Select SettingName, CongestionProvider, AutomaticUseCustom
SettingName                    CongestionProvider        AutomaticUseCustom
-----------                    ------------------        ------------------
Automatic
InternetCustom                 CTCP                      Disabled
DatacenterCustom               DCTCP                     Disabled
Compat                         NewReno                   Disabled
Datacenter                     DCTCP                     Disabled
Internet                       CTCP                      Disabled

We can only change the Custom templates, so we need to make the server use the custom templates. Changing AutomaticUseCustom to Enabled will do this for us:

PS C:\Users\dahavey> Set-NetTCPSetting -SettingName InternetCustom -AutomaticUseCustom Enabled
PS C:\Users\dahavey> Get-NetTCPSetting | Select SettingName, CongestionProvider, AutomaticUseCustom
SettingName                      CongestionProvider        AutomaticUseCustom
-----------                      ------------------        ------------------
Automatic
InternetCustom                   CTCP                      Enabled
DatacenterCustom                 DCTCP                     Enabled
Compat                           NewReno                   Enabled
Datacenter                       DCTCP                     Enabled
Internet                         CTCP                      Enabled

Hey, they all changed even though I only changed the InternetCustom template! Yes, AutomaticUseCustom is an all-or-nothing setting.

Now we need to change the templates’ CongestionProvider to CUBIC!
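(The original post doesn’t show the command for this step. Assuming the updated Windows Server 2016 build exposes CUBIC as a value for Set-NetTCPSetting’s CongestionProvider parameter – check Get-Help Set-NetTCPSetting on your own box – the step would look roughly like this, after which Get-NetTCPSetting reports:)

PS C:\Users\dahavey> Set-NetTCPSetting -SettingName InternetCustom -CongestionProvider CUBIC
PS C:\Users\dahavey> Set-NetTCPSetting -SettingName DatacenterCustom -CongestionProvider CUBIC
PS C:\Users\dahavey> Get-NetTCPSetting | Select SettingName, CongestionProvider, AutomaticUseCustom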

SettingName                     CongestionProvider        AutomaticUseCustom
-----------                     ------------------        ------------------
Automatic
InternetCustom                  CUBIC                     Enabled
DatacenterCustom                CUBIC                     Enabled
Compat                          NewReno                   Enabled
Datacenter                      DCTCP                     Enabled
Internet                        CTCP                      Enabled

And now we are a Cubic server just like WS 2019!

If you want to use LEDBAT, see the instructions (“Try it out!” link) in my LEDBAT blog: Top 10 Networking Features in Windows Server 2019: #9 LEDBAT – Latency Optimized Background Transport
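As a rough, hedged sketch only (the linked post has the authoritative steps): on a server whose outbound traffic should be treated as background, you point one of the custom templates at LEDBAT and then scope that template to the service’s traffic with a transport filter. The template name and port 445 below are purely illustrative.

# Hedged sketch – see the linked LEDBAT post for the exact, supported commands
Set-NetTCPSetting -SettingName InternetCustom -CongestionProvider LEDBAT
New-NetTransportFilter -SettingName InternetCustom -LocalPortStart 445 -LocalPortEnd 445 -RemotePortStart 0 -RemotePortEnd 65535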

 

Thanks for reading!


Storage Migration Service Log Collector Available

Heya folks, Ned here again. We have put together a log collection script for the Storage Migration Service, if you ever need to troubleshoot or work with MS Support.

https://aka.ms/smslogs

It will grab the right logs and drop them into a zip file. Pretty straightforward; see the readme for instructions. It is an open-source project with an MIT license, so feel free to tinker or fork it for your own needs. I will eventually move it to its own GitHub project, but for now it’s under me.

You will, of course, never need this. 😀

– Ned “get real” Pyle

 

Alerts in SCOM from Azure Application Insights with Azure Management Pack

To bring alert and performance data from Azure into SCOM, the Azure Management Pack can be used. The Azure Management Pack guide describes its capabilities in detail; please refer to it for more details.

This blog will talk about how we can see the alerts for Application Insights availability tests in the SCOM console. Let’s start.

 

Install the latest Azure Management Pack from https://www.microsoft.com/en-us/download/details.aspx?id=50013.

Import the MP from the Operations Console.
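(If you prefer scripting the import, the OperationsManager module’s Import-SCOMManagementPack cmdlet can do the same thing; the path and file name below are illustrative placeholders for wherever you saved the downloaded management pack files.)

# Illustrative only – point -Fullname at the management pack file(s) you downloaded/extracted
Import-Module OperationsManager
Import-SCOMManagementPack -Fullname "C:\MPs\Microsoft.SystemCenter.MicrosoftAzure.mpb"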


Now go to Administration Tab -> Microsoft Azure -> Add Subscription and connect to your Azure subscription with your credentials.


You will see your subscription ID listed under Subscription ID, as shown above.

 

The next step is to author a management pack template to monitor Azure resources.

From the SCOM Console left pane, select Authoring. Then, under Management Pack Templates, select Microsoft Azure Monitoring -> Add Monitoring Wizard and follow the steps below.


 

Please ensure that you create a new management pack to store the Azure Management Pack template.


We are selecting only Application Insights components and web tests here.

In Application Insights, for a ping test or multi-step web test, users can choose to configure Classic Alerts or Metric Alerts for their tests.

We want to see both Alertrules (Classic Alerts) and Metricalerts (Metric Alerts) for our Azure tests, so we select both below.


Under components (Microsoft.insights), there is a list of metrics available which can be selected to Collect Data in SCOM.

To collect this data, your application hosted in Azure must be instrumented to emit these metrics. If your application isn’t instrumented to collect these metrics in Azure, the Azure Management Pack cannot collect them. In short, the Azure MP will only collect and show metrics that you are already collecting in Azure.

The Alert If checkbox below raises alerts in SCOM when the value crosses the threshold specified in the Threshold column. The condition can be changed to Greater Than or Less Than as required.

For example, if you want to raise an alert in SCOM when Receive Response Time is greater than 2 seconds, change the Threshold value of Receive Response Time to 2.


 

After this, complete the wizard by clicking Next >.

 

After this is done, the Azure MP will load the resources into SCOM; depending upon how many resources you have under your subscription, this usually takes less than a minute.

Go to the Monitoring tab; under Service State you will find all your Application Insights components and their health state.


 

If you want to see health state by resource group, select Resource Group State. In the screenshot below, all the resource groups under my subscription are listed.

To see all the resource data consolidated in one place, go to the Service State tab.


If you want to see active alerts for your resource group, right-click your resource group name and select Alert View.

It will show all the active alerts, as below. Select an alert and its description will be available under Alert Details.


 

You can view Alert Properties for this alert from an Azure resource just like for any other alert in SCOM.


We have improved the alert description of metric alerts generated by the Application Insights ping test. The new alert will look like the following.

 


 

You can configure metric alerts in the Azure portal for your availability ping test as shown below.

Go to Application Insights -> Availability -> Add Test


Enter the required details and select the highlighted Alert type and Alert Status to create metric alerts. For these alerts, you will see the description as mentioned above.


 

Customers who have SCOM 2016 can use the HTML5 dashboard for viewing the alerts and performance data from Azure in SCOM.

Please leave a comment with the new features you would like to see in the next release of the Azure Management Pack.

Thanks,

Neha

Please share your feedback about SCOM at https://systemcenterom.uservoice.com/forums/293064-general-operations-manager-feedback

Express updates for Windows Server 2016 re-enabled for November 2018 update

This blog post was authored by Joel Frauenheim, Principal Program Manager, Windows Servicing and Delivery.

Starting with the November 13, 2018 Update Tuesday, Windows will again publish Express updates for Windows Server 2016. Express updates for Windows Server 2016 stopped in mid-2017 after a significant issue was found that kept the updates from installing correctly. While the issue was fixed in November 2017, the update team took a conservative approach to publishing the Express packages to ensure most customers would have the November 14, 2017 update (KB 4048953) installed on their server environments and not be impacted by the issue.

System administrators for WSUS and System Center Configuration Manager (SCCM) need to be aware that in November 2018 they will once again see two packages for the Windows Server 2016 update: a Full update and an Express update. System administrators who want to use Express for their server environments need to confirm that the device has taken a full update since November 14, 2017 (KB 4048953) to ensure the Express update installs correctly. Any device which has not been updated since the November 14, 2017 update (KB 4048953) will see repeated failures that consume bandwidth and CPU resources in an infinite loop if the Express update is attempted. Remediation for that state would be for the system administrator to stop pushing the Express update and push a recent Full update to stop the failure loop.
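One quick, hedged way to spot-check a server before enabling Express is to list its most recently installed updates and confirm that something newer than the November 14, 2017 cumulative update is present (later cumulative updates supersede KB 4048953, so don’t rely on seeing that exact KB in the list):

# Hedged spot-check: show the five most recently installed updates on the local server
Get-HotFix | Sort-Object InstalledOn -Descending | Select-Object HotFixID, Description, InstalledOn -First 5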

With the November 13, 2018 Express update customers will see an immediate reduction of package size between their Management system and the Windows Server 2016 end points.

The post Express updates for Windows Server 2016 re-enabled for November 2018 update appeared first on Windows Server Blog.

Windows Server 2019 Now Available

Introduction

Windows Server 2019 is once again generally available. You can pull the new Windows Server 2019 images—including the new ‘Windows’ base image—via:

docker pull mcr.microsoft.com/windows/servercore:1809
docker pull mcr.microsoft.com/windows/nanoserver:1809
docker pull mcr.microsoft.com/windows:1809

Just like the Windows Server 2016 release, the Windows Server Core container image is the only Windows base image in our Long-Term Servicing Channel. For this image we also have an ‘ltsc2019’ tag available to use:

docker pull mcr.microsoft.com/windows/servercore:ltsc2019

The Nanoserver and Windows base images continue to be Semi-Annual Channel releases only.

MCR is the De Facto container source

You can now pull any Windows base image:tag combination from the MCR (Microsoft Container Registry). Whether you’re using a container based on the Windows Server 2016 release, version 1709, version 1803 or any tag in between, you should change your container pull references to the MCR source. Example:

#Here’s the old string for pulling a container
docker pull microsoft/windowsservercore:ltsc2016
docker pull microsoft/nanoserver:1709

#Change the string to the new syntax and use the same tag
docker pull mcr.microsoft.com/windows/servercore:ltsc2016
docker pull mcr.microsoft.com/windows/nanoserver:1709

Or, update your dockerfiles to reference the new image location:

#Here’s the old string to specify the base image
FROM microsoft/windowsservercore:ltsc2016

#Here’s the new, recommended string to specify your base image. Use whichever tag you’d like
FROM mcr.microsoft.com/windows/servercore:ltsc2016

We want to emphasize the MCR is not the place to browse for container images; it’s where you pull images from. Docker Hub continues to be the preferred medium for container image discovery. Steve Lasker’s blog post does a great job outlining the unique value proposition the MCR will bring for our customers.

The Windows Server 2019 VM images for the Azure gallery will be rolling out within the next few days and will come packaged with the most up-to-date Windows Server 2019 container images.

Deprecating the ‘latest’ tag

We are deprecating the ‘latest’ tag across all our Windows base images to encourage better container practices. At the beginning of the 2019 calendar year, we will no longer publish the tag; we’ll yank it from the available tags list.

We strongly encourage you to instead declare the specific container tag you’d like to run in production. The ‘latest’ tag is the opposite of specific; it doesn’t tell the user anything about what version the container actually is apart from the image name. You can read more about version compatibility and selecting the appropriate tag on our container docs.
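For example, pin the Server Core image to one of the explicit tags already shown above instead of relying on the floating tag:

#Floating tag – avoid; it says nothing about which version you actually get
docker pull mcr.microsoft.com/windows/servercore:latest

#Explicit tags – preferred
docker pull mcr.microsoft.com/windows/servercore:1809
docker pull mcr.microsoft.com/windows/servercore:ltsc2019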

Conclusion

For more information, please visit our container docs at aka.ms/containers. What other topics & content would you like to see written about containers? Let us know in the comments below or send me a tweet.

Cheers,

Craig Wilhite (@CraigWilhite)

Update on Windows Server 2019 availability

This blog post was authored by Vinicius Apolinario, Senior Technical Product Manager, Windows Server.

On October 2, 2018, we announced the availability of Windows Server 2019 and Windows Server, version 1809. Later that week, we paused the rollout of these new releases to investigate isolated reports of users missing files after updating to the latest Windows 10 feature update. We take any case of data loss seriously, so we proactively removed all related media from our channels as we started investigation of the reports and have now fixed all known related issues.

In addition to extensive internal validation, we have taken time to closely monitor feedback and diagnostic data from our Windows Insiders and from millions of devices on the Windows 10 October 2018 Update. There is no further evidence of data loss. Based on this data, today we are beginning the re-release of Windows Server 2019, Windows Server, version 1809, and the related versions of Windows 10.

Customers with a valid license of Windows Server 2019 and Windows Server, version 1809 can download the media from the Volume Licensing Service Center (VLSC). Azure customers will see the Windows Server 2019 image available in the Azure Marketplace over the coming week. We are also working to make the Windows Server 2019 evaluation available on the Microsoft Eval Center. We will provide an update to this blog and our social channels once it’s available.

November 13, 2018 marks the revised start of the servicing timeline for both the Long-Term Servicing Channel and the Semi-Annual Channel. For more information please visit the Support Lifecycle page.

The post Update on Windows Server 2019 availability appeared first on Windows Server Blog.

Managing Windows containers with Red Hat OpenShift Container Platform 3.11

Who is the new guy blogging?

Before getting into the topic, I wanted to introduce myself.
My name is Mike Kostersitz; I am a Principal Program Manager and just joined the core networking team in the Cloud and AI organization. I will be focusing on expanding the Windows container networking ecosystem, working with partners to bring their solutions to Windows, and bridging the gap between Linux and Windows systems in the container space.
Enough of the intro. Let’s jump in.

What is Red Hat OpenShift

In short, OpenShift enables build and deployment automation as well as continuous integration and continuous delivery for container systems built on Kubernetes. At a high level, OpenShift is an open-source container management platform that sits on top of the Kubernetes container orchestration system and a container runtime (see here for the supported runtimes).

My colleague David Schott has covered the current state of Kubernetes support for Windows. Microsoft is working with the Red Hat OpenShift team to enable management and deployment of mixed clusters using the OpenShift platform toolset.

OpenShift for Windows will enable managing Windows Server 2019 nodes and containers in a mixed Linux and Windows OpenShift deployment.
The solution will run on Red Hat Enterprise Linux 7.x and, for now, use Windows Server, version 1803 worker nodes. The nodes can be physical or virtual.

In the simple example below, we deploy two virtual machines: one to run the Red Hat OpenShift master node and one to be the worker node for the Windows containers, running Windows Server Core.
To enable network connectivity the solution uses the Cloudbase OVN/OVS CNI plugin, and to allow seamless setup the cluster requires a DNS and a DHCP server. The DHCP server is used to assign IP addresses to the Windows worker node and all pods in the system.
(Note to self: Don’t deploy a DHCP server in your corporate network. Bad things might happen, such as everyone in the local area getting a non-routable IP address from your server, blocking internet and corporate resource access.)

  • The high-level deployment of the components looks like the diagram below.
  • The master node currently runs Red Hat OpenShift version 3.11 on top of RHEL 7.5.
  • The worker node runs Windows Server Core, currently version 1803 but soon Windows Server 2019.
  • Both use the OVS/OVN plug-in for networking developed by Cloudbase Solutions. We are working on adding other CNI plug-ins before release.
  • The networking mode is set up as an overlay network but will support other modes too.

High level diagram of an OpenShift for Windows 2 node cluster deployment

OpenShift for Windows example deployment

Summary

OpenShift support for Windows is coming and will provide the build, deployment and CI/CD capabilities of OpenShift on Linux to Windows Server.
While we have not yet set a final release date for OpenShift on Windows, we are working closely with the OpenShift team at Red Hat and are looking forward to releasing a preview of what is to come sometime in the first half of next year.

Stay tuned for more on this topic as things develop.

Thanks for reading this far, and keep an eye out for our next post on OpenShift for Windows.

Announcing General Availability of the Windows Compatibility Module 1.0.0

The Windows Compatibility module (WindowsCompatibility) is a PowerShell module that lets PowerShell Core 6 scripts access Windows PowerShell modules that are not yet natively available on PowerShell Core. (Note: the list of unavailable commands is getting smaller with each new release of PowerShell Core. This module is just for the things that aren’t natively supported yet.)

You can install the module from the PowerShell Gallery using the command

Install-Module WindowsCompatibility

and the source code is available on GitHub. (This is where you should open issues or make suggestions.)

Once you have WindowsCompatibility installed, you can start using it. The first thing you might want to run is Get-WinModule, which will show you the list of available modules. From that list, choose a module, say PKI, and load it. To do this, run the following command:

Import-WinModule PKI

and you’ll have the commands exported by the PKI module in your local session. You can run them just like any other command. For example:

New-SelfSignedCertificate -DnsName localhost

As always, you can see what a module exported by running:

Get-Command -module PKI

just like any other module.

These are the most important commands but the WindowsCompatibility module provides some others:

  • Invoke-WinCommand allows you to invoke a one-time command in the compatibility session.
  • Add-WinFunction allows you to define new functions that operate implicitly in the compatibility session.
  • Compare-WinModule lets you compare what you have against what’s available.
  • Copy-WinModule will let you copy Windows PowerShell modules that are known to work in PowerShell 6 to the PowerShell 6 command path.
  • Initialize-WinSession gives you more control over where and how the compatibility session is created. For example, it will allow you to place the compatibility session on another machine.

(See the module’s command help for more details and examples on how to use the WindowsCompatibility functions.)
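Here is a rough sketch of two of these in use (treat the exact parameter handling as illustrative; the module’s help has the authoritative syntax):

# Run a one-off command in the compatibility (Windows PowerShell) session
Invoke-WinCommand { $PSVersionTable.PSVersion }

# Define a local function that always executes in the compatibility session
Add-WinFunction Get-AppEvents { param($Count) Get-EventLog -LogName Application -Newest $Count }
Get-AppEvents 3   # the positional argument is forwarded into the compatibility session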

How It Works

The WindowsCompatibility module takes advantage of the ‘Implicit Remoting’ feature that has been available in PowerShell since version 2. Implicit remoting works by retrieving command metadata from a remote session and synthesizing proxy functions in the local session. When you call one of these proxy functions, it takes all of the parameters passed to it and forwards them to the real command in the “remote” session. Wait a minute, you may be thinking – what does remoting have to do with the WindowsCompatibility module? WindowsCompatibility automatically creates and manages a ‘local remote’ session, called the ‘compatibility session’, that runs with Windows PowerShell on the local machine. It imports the specified module and then creates local proxy functions for all of the commands defined in that module.
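Conceptually, the plumbing it automates is similar to doing implicit remoting by hand with the built-in cmdlets. This is a simplified sketch of the idea, not the module’s actual internals (it assumes WinRM/PowerShell remoting is enabled on the local machine):

# Manual implicit-remoting sketch
$s = New-PSSession -ComputerName localhost              # the default WinRM endpoint hosts Windows PowerShell 5.1
Invoke-Command -Session $s { Import-Module PKI }         # load the module in that session
Import-PSSession -Session $s -Module PKI                 # synthesize local proxy functions for its commands
New-SelfSignedCertificate -DnsName localhost             # the proxy forwards the call into that session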

OK – what about modules that exist in both Windows PowerShell and PowerShell Core? Yes – you can import them. After all, there are still a fair number of base cmdlets that aren’t available in PowerShell Core yet.

So how does this work? WindowsCompatibility is very careful to not overwrite native PowerShell core commands. It only imports the ones that are available with Windows PowerShell but not with PowerShell Core. For example, the following will import the PowerShell default management module

 Import-WinModule  Microsoft.PowerShell.Management

which contains, among others, the Get-EventLog cmdlet. None of the native PowerShell Core cmdlets get overwritten but now you have Get-EventLog available in your session.

At this point, if you call Get-Module, you will see something a bit strange:

Get-Module | ForEach-Object Name

results in output that looks like:

Microsoft.PowerShell.Management
Microsoft.PowerShell.Management.WinModule
Microsoft.PowerShell.Utility
NetTCPIP

Import-WinModule renames the compatibility module at load time to prevent collisions with identically named modules, so that module-qualified commands will resolve against the current module. In fact, if you want to see what additional commands were imported, you can run:

Get-Command -Module  Microsoft.PowerShell.Management.WinModule

Limitations

Because WindowsCompatibility is based on implicit remoting, there are a number of significant limitations on the cmdlets imported by the module. First, because everything is done using the remoting protocol, the imported cmdlets will return deserialized objects that only contain properties. Much of the time, this won’t matter because the parameter binder binds by property name rather than by object type. As long as the required properties are present on the object, it doesn’t matter what type the object actually is. There are, however, cases where the cmdlet actually requires that the object be of a specific type or that it have methods. WindowsCompatibility won’t work for these cmdlets.
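One hedged way to see this in your own session: objects returned by commands imported with Import-WinModule carry a “Deserialized.” type-name prefix, which is the telltale sign that methods and exact types were lost in transit. For example, assuming the Microsoft.PowerShell.Management example from above has been imported:

Import-WinModule Microsoft.PowerShell.Management
(Get-EventLog -LogName Application -Newest 1).pstypenames[0]   # typically "Deserialized.System.Diagnostics.EventLogEntry"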

Windows Forms and other graphical tools

The remoting session is considered non-interactive, so graphical tools such as notepad or WinForms scripts will either fail or, worse, hang.

Linux and Mac support

This module depends on WinRM and the client libraries on these platforms are known to be unstable and limited. So for this release, only PowerShell Core running on Windows is supported. (This may change in the future. But you’ll still need a Windows machine with Windows PowerShell to host the compatibility session.)

PowerShell 6.1 Dependency

WindowsCompatibility depends on a feature introduced in PowerShell Core 6.1 for keeping the current working directory in both the local and compatibility sessions synchronized. Earlier versions of PowerShell will work with WindowsCompatibility but won’t have this directory synchronization feature. So if you’re running PowerShell Core 6.0, import a command that writes to files, do Set-Location to a new directory, then use that command to write to a file with an unqualified path; it will use the original path from when the module was imported rather than your session’s current working directory. On PowerShell Core 6.1, it will correctly use the current working directory.

Summary

To sum it all up, the WindowsCompatibility module provides a set of commands that allow you to access Windows PowerShell modules from PowerShell Core 6. There are, however, some limitations that make it unsuitable for some scenarios. Over time, as more and more modules are ported to .NET Core/PowerShell 6 natively, there will be less need for this module.

Cheers!
Bruce Payette,
PowerShell Team.


MCAS brings its real-time CASB controls to on-prem apps!

Managing hybrid IT environments is a reality for most organizations today. Forbes is predicting that by 2020 on-premises workloads will still account for 27% of all enterprise workloads. Consequently, and despite the rapid move to the cloud, we can expect that critical workloads will continue to be managed in hybrid environments for years to come. Across these hybrid deployments, you are tasked with providing a simple and integrated experience for your users, while securing the confidential data that’s stored in your organization’s apps and resources.

 

Microsoft Cloud App Security now natively integrates with Azure AD Application Proxy to enable organizations to enforce real-time controls for any on-premises app and ensure a consistent security experience across hybrid cloud workloads - delivering on a capability that is unique in the market of Cloud Access Security Brokers (CASBs).

 

Azure AD Application Proxy provides single sign-on and secure remote access for web apps that are hosted on-premises. These on-prem web apps can be integrated with Azure AD to give end users the ability to access them in the same way they access Office 365 and other SaaS apps. Conditional Access App Control provides real-time controls for your organization’s apps, to allow for powerful use-cases such as controlling downloads, monitoring low-trust sessions, creating read-only modes, and more.

 

By integrating these two capabilities, we’re ensuring that your apps and services are protected in a consistent manner, regardless of where they are hosted. For example, if you use an on-prem app that enables file sharing and collaboration, you can publish this app via the Azure AD App Proxy to enable your users to access their files from anywhere, at any time. Configuring the app with Conditional Access App Control allows you to limit what a user can do, e.g., downloading files, when a user session is considered risky, such as when the app is accessed from an unmanaged device.

 

As you migrate to the cloud and adopt cloud-based file-collaboration tools such as OneDrive or Dropbox, you can continue to utilize the same download policy to ensure the end-user experience, as well as the security you’ve come to expect, remain unchanged. This is just one scenario of many, across any application, that allows you to achieve this continuity, convenience, and powerful security.

 

More info and feedback

Learn how to get started with Microsoft Cloud App Security with our detailed technical documentation. Don’t have Microsoft Cloud App Security? Start a free trial today!

 

As always, we want to hear from you! If you have any suggestions, questions, or comments, please visit us on our Tech Community page.

 

To learn how you can provide simple, secure, and cost-effective remote access with Azure AD Application Proxy check out our getting started guide.

Intune’s Journey to a Highly Scalable Globally Distributed Cloud Service

Earlier this year, I published the 1st blog post in a 4-part series that examines Intune’s journey to become a global, scalable cloud service.  Today, in Part 2, I’ll explain the three proactive actions we took to prepare for immediate future growth. The key things we learned along the way are summarized at the end. 

 

While this blog primarily discusses engineering learnings, if you are an Intune administrator, I hope it gives you an added level of confidence in the service that you depend on every day; there is an extraordinary amount of dedication and thought that goes into building, operating, scaling and, most importantly, continuously improving Intune as a service.  I hope some of the learnings in this blog are also applicable to you; we certainly learned a ton over the years about the importance of data-driven analysis and planning.

 

To quickly recap Part 1 in this series, the four key things we learned from re-building Intune were:

  1. Make telemetry and alerting one of the most critical parts of your design – and continue to refine the telemetry and alerting after the feature is in production.
  2. Know your dependencies. If your scale solution doesn’t align to your dependent platform, all bets are off.
  3. Continually validate your assumptions. Many cloud services/platforms are still evolving, and assumptions from 1 month ago may no longer be valid.
  4. Make it a priority to do capacity planning. This is the difference between being reactive and proactive for scale issues.

With all of that in mind, here (in chronological order) are the actions we took based on what we learned:

Action #1:  Fostering a Data-driven Culture

Deciding to make our culture and decision-making ethos entirely data-driven was our absolute top priority.  When we realized that the data and telemetry available to us could be core parts of engineering the Intune services, the decision was obvious.  But we went further by making the use of data a fundamental part of every person’s job and every step we took with the product.

 

To entrench data-driven thinking into our teams, we took a couple different approaches:

  • Moneyball training
  • Repeated emphasis in daily standups and data examinations.
  • Instituting weekly and monthly post-mortem reviews as well as Intune-wide service, SLA, and incident reviews.

 

In other words:  We took every opportunity, in any incident or meeting, to emphasize data usage – and we kept doing it until the culture shift to a hypothesis-driven engineering mindset became a natural part of our behavior.  Once we had this, every feature that we built had telemetry and alerting in place and it was verified in our pre-production environments before releasing to customers in production.

 

Now, every time we found a gap in telemetry and/or alerting in production, we could make it a high priority to track and fix the gaps.  This continues to be a core part of our culture today.

 

The result of this change was dramatic and measurable.  For example, before this culture change, we didn’t have access to (nor did we track) telemetry on how many customer incidents we could detect via our internal telemetry and alerting mechanism.  Now, a majority of our core customer scenarios are detected by internal telemetry, and our goal is to get this to > 90%.

 

Action #2:  Capacity Planning

Having predictive capacity analysis within Intune was something we simply could not live without.  We had to have a way to take proactive actions by anticipating potential scale limits much earlier than they actually happened.  To do this, we invested in predictive models by analyzing all our scenarios, their traffic and call patterns, and their resource consumptions.

 

The modeling was a fairly complex and automated process, but here it is at a high level:

  • This model resulted in what we called workload units.
  • A workload unit is defined as a resource-consuming operation.
    • For example, a user login may equal 1 workload unit while a device login may equal 4 workload units – i.e., 4 users consume a similar number of resources as 1 device.
  • A resource is defined by looking at a variety of metrics:
    • CPU cores
    • Memory
    • Network
    • Storage
    • Disk space
    • And evaluating the most limiting resource(s).
  • Typically, this turns out to be CPU and/or memory.
  • Using the workload definitions, we generated capacity in terms of workload units.
    • For example, if 1 user consumed 0.001% of CPU, 1 CPU core would equate to a capacity of 100,000 user workload units.
    • That is, we can support a max of 100,000 users or 25,000 devices (since 4 users == 1 device) or combinations of them with 1 CPU core.
  • We then compute the total capacity (i.e., max workloads) of the cluster based on the number of nodes in the cluster.

 

Once we had defined the capacities and workloads units, we could easily chart the maximum workload units we could support, the existing usage, and be alerted anytime the threshold exceeded a pre-defined percentage so that we could take proactive steps.

 

Initially, our thresholds were 45% of capacity as the “red” line and 30% as the “orange” line, to account for any errors in our models.  We also chose a preference toward over-provisioning rather than over-optimizing for perf and scale. A snapshot of such a chart is included below in Figure 1. The blue bars represent our maximum capacity, the black lines represent our current workloads, and the orange and red lines represent their respective thresholds. Each blue bar represents one ASF cluster (refer to the first blog on ASF). Over time, once we verified our models, we increased our thresholds significantly.
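To make the workload-unit arithmetic above concrete, here is a small illustrative sketch (in PowerShell, to match the other examples in this collection); the node count and cores per node are made-up values, while the ratios and thresholds come from the text:

# Illustrative only – cluster size and node SKU are assumptions; ratios/thresholds are from the text above
$cpuPerUser     = 0.00001                      # 1 user ~ 0.001% of one CPU core
$unitsPerCore   = [int](1 / $cpuPerUser)       # 100,000 user workload units per core
$unitsPerDevice = 4                            # 1 device ~ 4 user workload units
$coresPerNode   = 16                           # assumed node SKU
$nodeCount      = 10                           # assumed cluster size
$clusterUnits   = $unitsPerCore * $coresPerNode * $nodeCount
$orangeLine     = [int](0.30 * $clusterUnits)  # proactive "orange" threshold
$redLine        = [int](0.45 * $clusterUnits)  # "red" threshold
"Capacity: $clusterUnits units (~$($clusterUnits / $unitsPerDevice) devices); orange at $orangeLine, red at $redLine"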

 

Figure 1: Intune’s Predictive Capacity Model


 

Action #3:  A Re-Architecture Resulting from Capacity Prediction

The results of the capacity modeling and prediction we designed turned out to be a major eye-opener. As you can see in Figure 1, we were above the “orange” line for many of our clusters, and this indicated that we needed to take action. From this data (and upon further analyses of our services, clusters, and a variety of metrics), we drew the following three very valuable insights:

  • Our limiting factor was the node as well as the cluster capacity. This meant we had to scale up and out.
  • Our architecture with stateful in-memory services required persistence, so that secondary replicas could rebuild from on-node disk state rather than performing a full-copy state transfer every time the secondary replica started (e.g., after process or node restarts).
  • Our messaging (pub/sub) architecture needed to move from a home-grown solution to Azure Event Hubs so that we could leverage a platform that satisfied our needs for high throughput and scale.

 

We quickly realized that even though we could scale out, we could not scale our nodes up from the existing SKUs because we were running on pinned clusters. In other words, it was not possible to upgrade these nodes to a higher and more powerful D15 Azure SKU (with 3x CPU cores, 2.5x memory, SSDs, etc.).  As noted in Learning #2 above, learning that an in-place upgrade of the cluster to a higher SKU was not possible was a big lesson for us.  As a result, we had to stand up an entirely new cluster with the new nodes – and, since all our data was in-memory, this meant that we needed to perform a data migration from the existing cluster to the new cluster.

 

This type of data migration from one cluster to another cluster was not something we had ever practiced before, and it required us to invest in many data migration drills. As we ran these in production, we also learned yet another valuable lesson:  Any data move from one source to another required efficient and intelligent data integrity checks that could be completed in a matter of seconds.

 

The second major change (as mentioned in the three insights above) was implementing persistence for our in-memory services.  This allowed us to rebuild our state in just a matter of a few seconds. Our analyses showed increasing amounts of time for rebuilds that were causing significant availability losses due to the state transfer using a full copy from primary to the secondary replicas. We also had a great collaboration (and very promising results) with Azure Service Fabric in implementing persistence with Reliable Collections.

 

The next major change was moving away from our home-grown pub/sub architecture, which was showing signs of end-of-life.  We recognized that it was time to re-evaluate our assumptions about usage, data/traffic patterns, and designs so that we could assess whether the design was still valid and scalable for the changes we were seeing.  We found that, in the meantime, Azure had evolved significantly and now offered a much better solution than what we could create ourselves.

 

The changes noted above represented what was essentially a re-architecture of Intune services, and this was a major project to undertake.  Ultimately, it would take a year to complete.  But, fortunately, this news did not catch us off guard; we had very early warning signs from the capacity models and the orange line thresholds which we had set earlier. These early warning signs gave us sufficient time to take proactive steps for scaling up, out, and for the re-architecture.

 

The results of the re-architecture were extremely impressive.  See below for Figures 2, 3, and 4 which summarize the results. Figure 2 shows that the P99 CPU usage dropped by more than 50%, Figure 3 shows that the P99 latency reduced by 65%, and Figure 4 shows that the rebuild performance for state transfer of 2.4M objects went from 10 minutes to 20 seconds.

 

 

Figure 2: P99 CPU Usage After Intune Services’ Re-Architecture


 

Figure 3: P99 Latency After Intune Services’ Re-Architecture


 

 

Figure 4: P99 Rebuild Times After Intune Services’ Re-Architecture


 

Learnings

Through this process, we learned 3 critical things that are applicable to any large-scale cloud service:

  1. Every data move that copies or moves data from one location to another must have data integrity checks to make sure that the copied data is consistent with the source data.
    • This is a critical part of ensuring that there is no data loss, and it has to be done before switching over and making the new data active and/or authoritative.
    • There are a variety of efficient/intelligent ways to achieve this without requiring an excessive amount of time or memory – but that is the topic of another blog. 😊
  2. It is a very bad idea to invent your own database (NoSQL or SQL, etc.), unless you are already in the database business.
    • Instead, leverage the infrastructures and solutions that have already been proven to work, and that have been built by teams whose purpose is to build and maintain databases.
    • If you do attempt to do this yourself, you will inevitably encounter the same problems, waste precious time re-inventing the solutions, and then spend even more time maintaining your database instead of spending that time on the business logic.
  3. Finally, the experiences detailed above taught us that it’s far better to over-provision than over-optimize.
    • In our case, because we set our orange-line thresholds low, it gave us sufficient time to react and re-architect. This meant, of course, that we were over-provisioned, but it was a huge benefit to our customers.

 

Conclusion

After the rollout of our re-architecture, the capacity charts immediately showed a significant improvement. The reliability of our capacity models, as well as the ability to scale up and out, gave us enough confidence to increase the thresholds for orange and red lines to higher numbers. Today, most of our clusters are under the orange line, and we continue to constantly evaluate and examine the capacity planning models – and we also use them to load balance our clusters globally.

 

By doing these things we were ready and able to evolve our tools and optimize our resources.  This, in turn, allowed us to scale better, improve SLAs, and increase the agility of our engineering teams.  I’ll cover this in Part 3.

Configuration Manager Peer Cache – Custom Reporting Examples

Hello all, my name is Seth Price and I am a Configuration Manager PFE. I recently had a customer with a large network environment and they wanted to enable Configuration Manager Peer Cache to help with network bandwidth optimization. They were looking for some reporting options to help determine where peer cache could benefit network utilization and what clients would be appropriate in these locations to enable as peer sources. This post provides custom report options to help identify peer cache source candidates and report on systems that are already configured as peer cache sources.

Background information

Peer Cache is a feature in Configuration Manager which expands on the capabilities of Branch Cache to optimize network utilization for content delivery. Peer Cache can be used to manage deployment of content to clients in remote locations.

https://docs.microsoft.com/en-us/sccm/core/plan-design/hierarchy/client-peer-cache

In a large network environment, it may be difficult to identify and track both subnets where Peer Cache could provide a benefit, and the best client options for enabling Peer Cache content sources in that subnet. Some of the considerations in this decision would include:

Enabling Peer Cache on a subnet:

  • Number of workstations on the subnet
  • Network location (Connection speed to DP in boundary group)

Enabling a client as a Peer Cache source:

  • Client OS
  • CCM client version (Does it support Peer Cache?)
  • Network connection type (Wired vs Wireless)
  • System chassis type
    • Example – Chassis type = 3, 6, or 7 (Desktop, Mini Tower, or Tower)

      This would exclude system types you may not want to use as a content source, such as laptops, notebooks, handhelds, and All-in-One systems.

  • Available system drive space

Here are a few examples of creating custom reports to assist customers with managing Peer Cache.

***Report requirements***

  1. Hardware inventory classes

    Hardware inventory will need to be configured to collect the following WMI classes:

    1. Root\ccm\SoftMgmtAgent (CacheConfig) – specifically the ‘Size’ property

      Required to get the CCM cache size on systems

    2. Root\ccm\Policy\Machine\ActualConfig (CCM_SuperPeerClientConfig) – specifically the ‘CanBeSuperPeer’ property
    3. System Enclosure (Win32_SystemEnclosure) – Chassis Types
  2. Update AD System Discovery to add the following AD attribute “OperatingSystem”

*Note – Instructions for configuring requirements including system discovery and hardware inventory are at the end of this post.
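
Before extending hardware inventory, you can confirm these classes exist on a ConfigMgr client with a quick local check (requires the client to be installed and local admin rights):

# Confirm the WMI classes used by the reports are present on a client.
Get-CimInstance -Namespace 'root\ccm\SoftMgmtAgent' -ClassName CacheConfig | Select-Object Size
Get-CimInstance -Namespace 'root\ccm\Policy\Machine\ActualConfig' -ClassName CCM_SuperPeerClientConfig | Select-Object CanBeSuperPeer
Get-CimInstance -ClassName Win32_SystemEnclosure | Select-Object ChassisTypes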

Download the .rdl files for both custom reports here:

https://github.com/setprice2245/Peer-Cache

Report 1

PE Peer Cache Candidate Dashboard


This report lists the AD sites and the number of subnets associated with each site. Expanding the site and specific subnet will provide details on the client count in that site and the number of Peer Cache content source candidates.

The details of the client in that site are listed and color coded for Peer Cache candidate status.

Green = (Peer Cache is already enabled)

Blue = (System meets the criteria to be recommended as a Peer Cache candidate)

Gray = (System does NOT meet criteria for Peer Cache candidate)

In this report the client system must meet the following criteria to be displayed in BLUE as Peer Cache capable (a quick local spot-check of these criteria is shown after the notes below).

Note – The data used for candidate criteria is based on hardware inventory, so depending on your hardware inventory configuration it may not be fully current (the default hardware inventory cycle is 7 days).

  • OS version (Like %Windows% NOT like %Server%)
  • Ethernet connection (AdapterType0) = ‘Ethernet 802.3’
  • IPAddress0 like ‘%.%.%.%’ (Not Null)
  • Free space on system drive is > 20 GB
  • CCM Client is Active
  • Client version 5.00.8540.1000 or later
  • Chassis type in (3,6,7) – Desktop, Mini Tower, or Tower

Note: The attached report will not list Server operating systems but I do have them enabled for display in the example report screenshot.
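
For a quick spot-check of a single machine against roughly the same criteria, similar data can be pulled locally from WMI with PowerShell. This is only a sketch that mirrors the report's thresholds (property names assume the standard inventory classes above); the report against the site database remains the authoritative view:

# Rough local approximation of the report's candidate criteria (sketch only; client
# version and client activity come from the ConfigMgr client and are not checked here).
$os      = Get-CimInstance Win32_OperatingSystem
$chassis = (Get-CimInstance Win32_SystemEnclosure).ChassisTypes
$nic     = Get-CimInstance Win32_NetworkAdapter -Filter "AdapterType='Ethernet 802.3' AND NetEnabled=TRUE"
$sysDrv  = Get-CimInstance Win32_LogicalDisk -Filter "DeviceID='$($env:SystemDrive)'"

$isCandidate = ($os.Caption -like '*Windows*' -and $os.Caption -notlike '*Server*') -and
               [bool]($chassis | Where-Object { $_ -in 3, 6, 7 }) -and
               [bool]$nic -and
               ($sysDrv.FreeSpace -gt 20GB)

"Peer Cache candidate: $isCandidate"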

Report 2

PE Peer Cache Enabled Clients


This report lists all systems that have the Peer Cache client enabled and system details such as chassis type, free system drive space, CCM cache size, client status, client version, OS name, AD site, and default gateway.

Configuring system discovery and hardware inventory requirements

  1. In the Config Mgr console under Administration > Hierarchy Configuration > Discovery Methods > Active Directory System Discovery > Properties > Add attribute operatingsystem

    Then start a system discovery

  2. Add required classes to hardware inventory.

    Under Administration > Client Settings > Modify the Default Client Setting

    Edit Hardware inventory and click Set Classes…

    Add System Enclosure (Win32_SystemEnclosure) class = Chassis Types as shown below


    For the next classes, select Set Classes…, then select Add.

    Click Connect

    Under WMI namespace, type Root\ccm\SoftMgmtAgent and select Recursive as shown below.

    *Note – You may need to run the Config Mgr console as administrator to have access.


    Select CacheConfig and select OK


    Back in hardware inventory classes, find CacheConfig (CacheConfig) and select the Size class as shown.


    Repeat this process to add the class Root\ccm\Policy\Machine\ActualConfig (CCM_SuperPeerClientConfig) – specifically the ‘CanBeSuperPeer’ property.


    After we have added the new hardware inventory classes to the default client settings policy, we need to run a machine policy evaluation on a client system, then run a hardware inventory to update the database (both can be triggered manually, as shown below).
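
    Rather than waiting for the next scheduled cycles, both actions can be triggered from an elevated PowerShell prompt on the client using the well-known ConfigMgr client schedule IDs (machine policy retrieval, then hardware inventory):

    # Trigger Machine Policy Retrieval & Evaluation, then a Hardware Inventory cycle.
    Invoke-CimMethod -Namespace 'root\ccm' -ClassName SMS_Client -MethodName TriggerSchedule -Arguments @{ sScheduleID = '{00000000-0000-0000-0000-000000000021}' }
    Start-Sleep -Seconds 120   # give the new policy a moment to apply
    Invoke-CimMethod -Namespace 'root\ccm' -ClassName SMS_Client -MethodName TriggerSchedule -Arguments @{ sScheduleID = '{00000000-0000-0000-0000-000000000001}' }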

    Next, we can browse to our report server website and import the .rdl files included in this post.

    *Note- Make sure to edit the report and change the data source to your database.


Thank you for reading this post; you should now be able to run both custom reports. Please provide feedback if the reports are useful or if you would like to see additional data in either of the reports.

Turkey Day Mailbag

Hello Networking Enthusiasts – Tomorrow, the US will celebrate Thanksgiving and since we’re so close to a holiday we decided to keep this week’s blog fairly simple and answer some common questions and information we’ve seen over the last few months.

If you have follow-up questions you’d like answered (or more details on what’s below), hit us up on Twitter @ Microsoft SDN!

RDMA and HCI

Q. Network traffic from Live Migrations takes valuable CPU cycles from my tenant VMs. How can I reduce the impact of a live migration for tenants, increase the number of live migrations I can perform, and/or increase the speed of the live migrations?

Answer from RDMA PM, Dan Cuomo:

Although not the default option, SMB can be selected as the live migration mechanism.  If selected, SMB can use RDMA under the hood (in this context, known as SMB Direct), which avoids the need to process the GBs (yes, gigabytes not bits) of network traffic produced by the live migration (e.g. VM memory or VHDX storage).

RDMA bypasses the host operating system and removes the processing burden of the live migrations.  Since host networking is most commonly constrained by the host CPU (remember, your VMs are competing for access to the same cores that process network traffic), RDMA eases the effect of the live migration on VMs on the same host, because those cores can now stay focused on the VMs’ CPU scheduling needs.

The net effect is an increase in the number of live migrations you can perform at once, because the CPU is no longer the bottleneck for the network or affecting your tenant VMs.
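
If you want to try this on a Hyper-V host, the live migration transport is a host-level setting; a short sketch of the relevant cmdlets (run on each host):

# Use SMB (and therefore SMB Direct/RDMA where available) as the live migration transport.
Set-VMHost -VirtualMachineMigrationPerformanceOption SMB

# Confirm which adapters are RDMA-capable and that RDMA is enabled on them.
Get-SmbClientNetworkInterface | Where-Object RdmaCapable
Get-NetAdapterRdma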

Software Defined Networking

Q. How do I get support deploying Software Defined Networking?
Answer from SDN PM, Schumann Ge:
There are a ton of resources available and we’d recommend you start with our documentation here.  However, if you’d like to speak to an expert, our field engineers would be glad to assist.  Contact them at SDNBlackbelt@Microsoft.com or hit up Microsoft SDN on Twitter!

Containers

Q. Does Red Hat OpenShift support Windows Containers?  Where can I find out more about Red Hat OpenShift?  What is the roadmap of supporting Windows Containers with Kubernetes?
We’re posting this answer from Containers PM, Mike Kosteritz under protest because it’s technically three questions…
See the blog post “Managing Windows containers with Red Hat OpenShift Container Platform 3.11” for an overview of what is coming in this space.
General information on OpenShift is available on the https://www.openshift.com/products website. If you have Windows-specific questions, please post a comment on the blog post “Managing Windows containers with Red Hat OpenShift Container Platform 3.11”.

Networking Diagnosis Tools

Q. How do I review all the pertinent networking information on my system?  I’m not sure I know all the cmdlets I need or how to put the data together into a cohesive view of my system.
Answer from Datapath PM, Dan Cuomo:
Get-NetView is a nifty script that curates all the pertinent networking information into a single zip file for portability.  It even grabs the data about the VMs sitting on the system.  If you’re one of the many customers we’ve worked with over the last year or so, you’ve no doubt had to run this command and send us the output for review.  Also, this tool is integrated into the Get-SDDCDiagnosticInfo cmdlet you’ve no doubt run when troubleshooting Storage Spaces Direct.
Once extracted to a folder, we’d recommend using Visual Studio Code to review the contents of the folder.  Check out Get-NetView on GitHub.
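
If you don’t already have the script, Get-NetView is also published to the PowerShell Gallery; a typical run looks roughly like this (double-check the parameters against the GitHub readme for your installed version):

# Install Get-NetView from the PowerShell Gallery and collect a diagnostics snapshot.
Install-Module -Name Get-NetView -Force
Get-NetView -OutputDirectory C:\Temp\NetView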


Happy Turkey Day,
Windows Core Networking Team

PowerShell Constrained Language mode and the Dot-Source Operator


PowerShell works with application control systems, such as AppLocker and Windows Defender Application Control (WDAC), by automatically running in ConstrainedLanguage mode. ConstrainedLanguage mode restricts some exploitable aspects of PowerShell while still giving you a rich shell to run commands and scripts in. This is different from the usual application whitelisting rules, where an application is either allowed to run or not.

But there are times when the full power of PowerShell is needed, so we allow script files to run in FullLanguage mode when they are trusted by the policy. Trust can be indicated through file signing or other policy mechanisms such as file hash. However, script typed into the interactive shell is always run constrained.

Since PowerShell can run script in both Full and Constrained language modes, we need to protect the boundary between them. We don’t want to leak variables or functions between sessions running in different language modes.

The PowerShell dot-source operator brings script files into the current session scope. It is a way to reuse script. All script functions and variables defined in the script file become part of the script it is dot sourced into. It is like copying and pasting text from the script file directly into your script.

# HelperFn1, HelperFn2 are defined in HelperFunctions.ps1
# Dot-source the file here to get access to them (no need to copy/paste)
. c:\Scripts\HelperFunctions.ps1
HelperFn1
HelperFn2

This presents a problem when language modes are in effect with system application control. If an untrusted script is dot-sourced into a script with full trust then it has access to all those functions that run in FullLanguage mode, which can result in application control bypass through arbitrary code execution or privilege escalation. Consequently, PowerShell prevents this by throwing an error when dot-sourcing is attempted across language modes.

Example 1:

System is in WDAC policy lock down. To start with, neither script is trusted and so both run in ConstrainedLanguage mode. But the HelperFn1 function uses method invocation which isn’t allowed in that mode.

PS> type c:\MyScript.ps1
Write-Output "Dot sourcing MyHelper.ps1 script file"
. c:\MyHelper.ps1
HelperFn1
PS> type c:\MyHelper.ps1
function HelperFn1
{
    "Language mode: $($ExecutionContext.SessionState.LanguageMode)"
    [System.Console]::WriteLine("This can only run in FullLanguage mode!")
}
PS> c:\MyScript.ps1
Dot sourcing MyHelper.ps1 script file
Language mode: ConstrainedLanguage
Cannot invoke method. Method invocation is supported only on core types in this language mode.
At C:\MyHelper.ps1:4 char:5
+     [System.Console]::WriteLine("This cannot run in ConstrainedLangua ...
+     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (:) [], RuntimeException
    + FullyQualifiedErrorId : MethodInvocationNotSupportedInConstrainedLanguage

Both scripts are untrusted and run in ConstrainedLanguage mode, so dot-sourcing the MyHelper.ps1 file works. However, the HelperFn1 function performs method invocation that is not allowed in ConstrainedLanguage and fails when run. MyHelper.ps1 needs to be signed as trusted so it can run at FullLanguage.

Next we have mixed language modes. MyHelper.ps1 is signed and trusted, but MyScript.ps1 is not.

PS> c:\MyScript.ps1
Dot sourcing MyHelper.ps1 script file
C:\MyHelper.ps1 : Cannot dot-source this command because it was defined in a different language mode. To invoke this command without importing its contents, omit the '.' operator.
At C:\MyScript.ps1:2 char:1
+ . 'c:\MyHelper.ps1'
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (:) [MyHelper.ps1], NotSupportedException
    + FullyQualifiedErrorId : DotSourceNotSupported,MyHelper.ps1
...

And we get a dot-source error because we are trying to dot-source script that has a different language mode than the session it is being dot-sourced into.

Finally, we sign as trusted both script files and everything works.

PS> c:\MyScript.ps1
Dot sourcing MyHelper.ps1 script file
Language mode: FullLanguage
This can only run in FullLanguage mode!

The lesson here is to ensure all script components run in the same language mode on policy locked down systems. If one component must run in FullLanguage mode, then all components should run in FullLanguage mode. This means validating that each component is safe to run in FullLanguage and indicating they are trusted to the application control policy.

So this solves all language mode problems, right? If FullLanguage is not needed then just ensure all script components run untrusted, which is the default condition. If they require FullLanguage then carefully validate all components and mark them as trusted. Unfortunately, there is one case where this best practice doesn’t work.

PowerShell Profile File

The PowerShell profile file (profile.ps1) is loaded and run at PowerShell start up. If that script requires FullLanguage mode on policy lock down systems, you just validate and sign the file as trusted, right?

Example 2:

PS> type c:\users\<user>\Documents\WindowsPowerShell\profile.ps1
Write-Output "Running Profile"
[System.Console]::WriteLine("This can only run in FullLanguage!")
# Sign file so it is trusted and will run in FullLanguage mode
PS> Set-AuthenticodeSignature -FilePath .\Profile.ps1 -Certificate $myPolicyCert
# Start a new PowerShell session and run the profile script
PS> powershell.exe
Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.
C:\Users\<user>\Documents\WindowsPowerShell\profile.ps1 : Cannot dot-source this command because it was defined in a different language mode. To invoke this command without importing its contents, omit the '.' operator.
At line:1 char:1
+ . 'C:\Users\<user>\Documents\WindowsPowerShell\profile.ps1'
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (:) [profile.ps1], NotSupportedException
    + FullyQualifiedErrorId : DotSourceNotSupported,profile.ps1

What gives? The profile.ps1 file was signed and is policy trusted. Why the error?
Well, the issue is that PowerShell dot-sources the profile.ps1 file into the default PowerShell session, which must run in ConstrainedLanguage because of the policy. So we are attempting to dot-source a FullLanguage script into a ConstrainedLanguage session, and that is not allowed. This is a catch-22: if the profile.ps1 is not signed, it may not run if it needs FullLanguage privileges (e.g., to invoke methods). But if you sign it, it still won’t run because of how it is dot-sourced into the current ConstrainedLanguage interactive session.

Unfortunately, the only solution is to keep the profile.ps1 file fairly simple so that it does not need FullLanguage, and refrain from making it trusted. Keep in mind that this is only an issue when running with application control policy. Otherwise, language modes do not come into play and PowerShell profile files run normally.
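
One practical pattern is to branch on the session’s language mode inside profile.ps1 and skip the FullLanguage-only work when the session is constrained; a minimal sketch:

# profile.ps1 sketch: only attempt FullLanguage-only work when the session allows it.
Write-Output "Running Profile"
if ($ExecutionContext.SessionState.LanguageMode -eq 'FullLanguage')
{
    [System.Console]::WriteLine("FullLanguage-only setup can go here")
}
else
{
    Write-Output "ConstrainedLanguage session - skipping FullLanguage-only setup"
}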

Paul Higinbotham
Senior Software Engineer
PowerShell Team

Microsoft Azure Backup Server (MABS) V3 is now available

Microsoft Azure Backup Server (MABS) V3 is now available for download. Please refer to Microsoft Help Article 4457852 for new features and critical bug fixes in MABS V3. MABS V3 combines all features and bug fixes from DPM 1801, DPM 1807, DPM 2016 UR5, and DPM 2016 UR6.

This upgrade is available for download from the Microsoft Download Center. Refer to the following link for download and installation instructions for MABS v3:

Download Microsoft Azure Backup Server v3

For information about how to download Microsoft support files, click the following article number to view the article in the Microsoft Knowledge Base:

119591 How to obtain Microsoft support files from online services


Infrastructure + Security: Noteworthy News (November, 2018)

Hi there! This is Stanislav Belov here, and you are reading the next issue of the Infrastructure + Security: Noteworthy News series!  

As a reminder, the Noteworthy News series covers various areas, to include interesting news, announcements, links, tips and tricks from Windows, Azure, and Security worlds on a monthly basis.

Microsoft Azure
A window to the cloud: Microsoft unveils new Azure Cloud Collaboration Center

As more businesses around the world adopt Azure — including 95 percent of the Fortune 500 — Microsoft has introduced a powerful new solution to enhance the performance and security of its cloud. The Azure Cloud Collaboration Center (CCC) is a new, state-of-the-art, 8,000-square-foot facility on Microsoft’s Redmond, Washington corporate campus. The centralized workspace allows engineering teams to come together to resolve operational issues and unexpected events that could impact customers.

Microsoft Azure portal November 2018 update
This month, we’re introducing a new way for you to switch between different Azure accounts without having to log off and log back in, or work with multiple browser tabs. We’ve also made enhancements to the way you find what you need in the Azure Marketplace, and to the management experience for Site Recovery, Access Control, and database services.
What’s new in PowerShell in Azure Cloud Shell
At Microsoft Ignite 2018, PowerShell in Azure Cloud Shell became generally available. Azure Cloud Shell provides an interactive, browser-accessible, authenticated shell for managing Azure resources from virtually anywhere. With multiple access points, including the Azure portal, the stand-alone experience, Azure documentation, the Azure mobile app, and the Azure Account Extension for Visual Studio Code, you can easily gain access to PowerShell in Cloud Shell to manage and deploy Azure resources.
Simplified restore experience for Azure Virtual Machines
Azure Backup now offers an improved restore experience for Azure Virtual Machines by leveraging the power of ARM templates and Azure Managed Disks. The new restore experience directly creates managed disk(s) and virtual machine (VM) templates. This eliminates the manual process of executing scripts or PowerShell commands to convert and configure the .VHD file, and complete the restore operation. There is zero manual intervention after the restore is triggered making it truly a single-click operation for restoring IaaS VMs.
Azure status
Check the current health of Azure services by region and product, or create your own personalized dashboard.
Holiday season is DDoS season
DDoS is an ever-growing problem, and the types of attacks are getting increasingly sophisticated. More importantly, DDoS attacks are often used as a “smokescreen,” masking more malicious and harmful infiltration of your resources. The technology to create DDoS attacks continues to increase in sophistication, while the cost and ability to instigate these attacks become more and more accessible, driving up the frequency and ease with which criminals can wreak havoc on businesses and users.
What is group-based licensing in Azure Active Directory?
Microsoft paid cloud services, such as Office 365, Enterprise Mobility + Security, Dynamics 365, and other similar products, require licenses. These licenses are assigned to each user who needs access to these services. To manage licenses, administrators use one of the management portals (Office or Azure) and PowerShell cmdlets. Azure Active Directory (Azure AD) is the underlying infrastructure that supports identity management for all Microsoft cloud services. Azure AD stores information about license assignment states for users.
Windows Server
Express updates for Windows Server 2016 re-enabled for November 2018 update

Starting with the November 13, 2018 Update Tuesday, Windows will again publish Express updates for Windows Server 2016. Express updates for Windows Server 2016 stopped in mid-2017 after a significant issue was found that kept the updates from installing correctly. While the issue was fixed in November 2017, the update team took a conservative approach to publishing the Express packages to ensure most customers would have the November 14, 2017 update (KB 4048953) installed on their server environments and not be impacted by the issue.

Use Azure Site Recovery to migrate Windows Server 2008 before End of Support

Don’t let the name fool you. Azure Site Recovery (ASR) can be used as an Azure migration tool for 30 days at no charge. It has been used for years to support migration of our 64-bit versions of Windows Server, and we are pleased to announce it now supports migration of Windows Server 2008 32-bit applications to Azure Virtual Machines.

Server Core and Server with Desktop: Which one is best for you

For most server scenarios, the Server Core installation option is the best (and recommended) choice. A Server Core installation is almost entirely headless, light weight, and ideally suited for large datacenters and clouds, both physical and virtual. Server Core’s smaller footprint comes with a smaller attack surface, making it less vulnerable than the Server with Desktop Experience option. That same smaller footprint means Server Core requires less disk space and consumes less of your network bandwidth (when you migrate VMs or roll out a large environment). With the new Windows Admin Center management capabilities, Server Core is easier than ever to manage, whether you like PowerShell scripts or a modern, graphical portal.

Windows Client
What’s new in Windows 10, version 1809

In this article we describe new and updated features of interest to IT Pros for Windows 10, version 1809. This update also contains all features and fixes included in previous cumulative updates to Windows 10, version 1803.

Security
Detecting fileless attacks with Azure Security Center

As the security solutions get better at detecting attacks, attackers are increasingly employing stealthier methods to avoid detection. In Azure, we regularly see fileless attacks targeting our customers’ endpoints. To avoid detection by traditional antivirus software and other filesystem-based detection mechanisms, attackers inject malicious payloads into memory. Attacker payloads surreptitiously persist within the memory of compromised processes and perform a wide range of malicious activities.

PAW deployment guide
There are a few different options to deploy PAW; in this blog post, we’ll focus on the solution that was evaluated in the PAW TAP program. The general feedback was positive, and customers liked the single-device configuration. The solution leverages the shielded VM capability built into Windows 10 1709 to run the secure workload, and it includes the client configuration (end user device) and the server backend.
Leverage Azure Security Center to detect when compromised Linux machines attack
When an attacker compromises a machine, they typically have a goal in mind. Some attackers are looking for information residing on the victim’s machine or are looking for access to other machines on the victim’s network. Other times, attackers have plans to use the processing power of the machine itself or even use the machine as a launch point for other attacks. On Linux virtual machines (VMs) in Microsoft Azure, we most commonly see attackers installing and running cryptocurrency mining software. This blog post will focus on the latter case, when an attacker wants to use the compromised machine as a launch point for other attacks.
The evolution of Microsoft Threat Protection, November update
At Ignite 2018, we announced Microsoft Threat Protection, a comprehensive, integrated solution securing the modern workplace across identities, endpoints, user data, cloud apps, and infrastructure. Engineers across teams at Microsoft are collaborating to unlock the full, envisioned potential of Microsoft Threat Protection. Throughout this journey, we want to keep you updated on its development.
What’s new in Windows Defender ATP
We added new capabilities to each of the pillars of Windows Defender ATP’s unified endpoint protection platform: improved attack surface reduction, better-than-ever next-gen protection, more powerful post-breach detection and response, enhanced automation capabilities, more security insights, and expanded threat hunting. These enhancements boost Windows Defender ATP and accrue to the broader Microsoft Threat Protection, an integrated solution for securing identities, endpoints, cloud apps, and infrastructure.
Windows Defender Antivirus can now run in a sandbox
Windows Defender Antivirus has hit a new milestone: the built-in antivirus capabilities on Windows can now run within a sandbox. With this new development, Windows Defender Antivirus becomes the first complete antivirus solution to have this capability and continues to lead the industry in raising the bar for security. Putting Windows Defender Antivirus in a restrictive process execution environment is a direct result of feedback that we received from the security industry and the research community. It was a complex undertaking: we had to carefully study the implications of such an enhancement on performance and functionality. More importantly, we had to identify high-risk areas and make sure that sandboxing did not adversely affect the level of security we have been providing.
Vulnerabilities and Updates
ADV180028 | Guidance for configuring BitLocker to enforce software encryption

Microsoft is aware of reports of vulnerabilities in the hardware encryption of certain self-encrypting drives (SEDs). Customers concerned about this issue should consider using the software-only encryption provided by BitLocker Drive Encryption™. On Windows computers with self-encrypting drives, BitLocker Drive Encryption™ manages encryption and will use hardware encryption by default. Administrators who want to force software encryption on computers with self-encrypting drives can accomplish this by deploying a Group Policy to override the default behavior. Windows will consult Group Policy to enforce software encryption only at the time of enabling BitLocker.
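
To check whether the volumes on a given machine are currently using hardware or software encryption (and would therefore need to be decrypted and re-encrypted after the policy change), the encryption method can be inspected from an elevated PowerShell prompt:

# Shows the encryption method in use for each BitLocker-protected volume.
manage-bde -status

# The BitLocker PowerShell module exposes the same information.
Get-BitLockerVolume | Select-Object MountPoint, VolumeStatus, EncryptionMethod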

Resuming the rollout of the Windows 10 October 2018 Update

In early October, we paused the rollout of the Windows 10 October 2018 Update as we investigated isolated reports of users missing files after updating. We take any case of data loss seriously, and we have thoroughly investigated and resolved all related issues. For our commercial customers, the re-release date of the Windows 10 version 1809 is November 13, 2018 (this includes Windows Server 2019 and Windows Server, version 1809). This date marks the revised start of the servicing timeline for the Semi-Annual Channel (“Targeted”) release.

Support Lifecycle
End of Support for SCEP for Mac and SCEP for Linux on December 31, 2018

Support for System Center Endpoint Protection (SCEP) for Mac and Linux (all versions) ends on December 31, 2018. Availability of new virus definitions for SCEP for Mac and SCEP for Linux may be discontinued after the end of support. This discontinuation may occur without notice. If you are using any version of SCEP for Mac or SCEP for Linux, plan to migrate to a replacement endpoint protection product for Mac and Linux clients.

Extended Security Updates for SQL Server and Windows Server 2008/2008 R2: Frequently Asked Questions (PDF)

On January 14, 2020, support for Windows Server 2008 and 2008 R2 will end. That means the end of regular security updates. Don’t let your infrastructure and applications go unprotected. We’re here to help you migrate to current versions for greater security, performance and innovation.

Products reaching End of Support for 2018

Products reaching End of Support for 2019

Products reaching End of Support for 2020

Microsoft Premier Support News
Check out Microsoft Services public blog for new Proactive Services as well as new features and capabilities of the Services Hub, On-demand Assessments, and On-demand Learning platforms.

Automating Security workflows with Microsoft’s CASB and MS Flow

As security becomes an increasingly greater concern and priority in organizations of all sizes, the role and importance of Security Operations Centers (SOC) continues to expand. While end users leverage new cloud apps and services on nearly a daily basis, security professionals remain a scarce resource. Consequently, SOC teams are looking for solutions that automate processes where possible, to keep up with demand, streamline their workflows, and reduce the number of incidents that require their direct oversight and interaction.

Reduce your potential attack surface using Azure ATP Lateral Movement Paths

Azure Advanced Threat Protection (Azure ATP) provides invaluable insights on identity configurations and suggested security best practices across the enterprise. A key component of Azure ATP’s insights is Lateral Movement Paths, or LMPs. Azure ATP LMPs are visual guides that help you quickly understand and identify exactly how attackers can move laterally inside your network. The purpose of lateral movement within a cyber-attack kill chain is for attackers to compromise your sensitive accounts on the way to domain dominance. Azure ATP LMPs provide easy-to-interpret, direct visual guidance on your most vulnerable sensitive accounts, and help you mitigate and close off the access an attacker could use to achieve domain dominance.

 

Lateral movement attacks, using non-sensitive accounts to gain access to sensitive accounts, can be accomplished through many different techniques. The most popular methods used by attackers are credential theft and Pass the Ticket. In both methods, your non-sensitive accounts are used by attackers for lateral moves by exploiting machines that share stored log-in credentials in accounts, groups and machines with your sensitive accounts.

 

Where can I find Azure ATP LMPs?

Every computer or user profile discovered by Azure ATP has a Lateral movement paths tab.

 

The LMP tab provides different information depending on sensitivity of the entity:

  • Sensitive users – potential LMP(s) leading to this user are shown.
  • Non-sensitive users and computers – potential LMP(s) the entity is related to are shown.  

When you click the tab, Azure ATP displays the most recently discovered LMP. Each potential LMP is saved for 48 hours following discovery. You can view older LMPs by clicking on view a different date.

 


 

V2.56 of Azure ATP adds two additional LMP capabilities: discover when potential LMPs were identified, and where.

 

When

From the Activities tab, we’ve added an indication when a new potential LMP was identified:

  • Sensitive users – when a new path was identified to a sensitive user
  • Non-sensitive users and computers – when this entity was identified in a potential LMP leading to a sensitive user

 

Where

LMPs can now directly assist with your investigation process. Azure ATP security alert evidence lists provide the related entities that are involved in each potential lateral movement path. The evidence lists directly help your security response team increase or reduce the importance of the security alert and/or the investigation of the related entities. For example, when a Pass the Ticket alert is issued, the source computer, the compromised user, and the destination computer the stolen ticket was used from are all part of the potential lateral movement path leading to a sensitive user.

 

The existence of the detected LMP makes investigating the alert and watching the suspected user even more important to prevent your adversary from additional lateral moves. Trackable evidence is provided in LMPs to make it easier and faster for you to prevent attackers from moving forward in your network.

 


 

 

It’s never too late

Security insights are never too late to prevent the next attack and remediate damage. For this reason, investigating an attack during the domain dominance phase provides a different, but important example. Typically, while investigating a security alert such as Remote Code Execution, if the alert is a true positive, your domain controller may already be compromised. But where did the attacker gain privileges, and what was their path into your network? How can the attack be remediated? These are critical questions to answer in order to remediate the attack, recover and prevent the next one.  

 

Assuming your network architecture is standard, the compromised user running remote commands on the domain controller must be a sensitive user. As a sensitive user, Azure ATP has already mapped and identified their potential LMPs. In a case where this user account is already compromised and has succeeded at running commands on a domain controller, LMPs are a fast, effective method for understanding what happened: How did the attacker gain user credentials? How did they achieve lateral moves in your network towards domain dominance? Although LMPs are only potential methods, combining LMPs with security alerts can provide invaluable insights into how attackers were able to use lateral moves within your organization to achieve their goals and the steps you need to take to prevent them in the future.

 

Additional data formats

LMP data is also available in the Lateral Movement Paths to Sensitive Accounts report. This report lists the sensitive accounts that are exposed via lateral movement paths and includes paths that were selected manually for a specific time period or included in the time period for scheduled reports. Customize the included date range using the calendar selection.

 

Learn more about investigations using lateral movement paths.

 

Get Started Today

Leveraging the scale and intelligence of the Microsoft Intelligent Security Graph, Azure ATP is part of Microsoft 365’s Enterprise Mobility + Security E5 suite.

 

 

DSC Resource Kit Release November 2018

We just released the DSC Resource Kit!

This release includes updates to 9 DSC resource modules. In the past 6 weeks, 61 pull requests have been merged and 67 issues have been closed, all thanks to our amazing community!

The modules updated in this release are:

  • AuditPolicyDsc
  • DFSDsc
  • NetworkingDsc
  • SecurityPolicyDsc
  • SharePointDsc
  • StorageDsc
  • xBitlocker
  • xExchange
  • xHyper-V

For a detailed list of the resource modules and fixes in this release, see the Included in this Release section below.
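
If you only want to pull down the modules updated in this release, you can install them by version from the PowerShell Gallery; for example:

# Install two of the updated modules at the versions listed in the table below.
Install-Module -Name NetworkingDsc -RequiredVersion 6.2.0.0
Install-Module -Name SharePointDsc -RequiredVersion 3.1.0.0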

Our latest community call for the DSC Resource Kit was supposed to be today, November 28, but the public link to the call expired, so the call was cancelled. I will update the link for next time. If there is interest in rescheduling this call, the new call time will be announced on Twitter (@katiedsc or @migreene). The call for the next release cycle is also getting moved a week later than usual, to January 9 at 12PM (Pacific standard time). Join us to ask questions and give feedback about your experience with the DSC Resource Kit.

The next DSC Resource Kit release will be on Wednesday, January 9.

We strongly encourage you to update to the newest version of all modules using the PowerShell Gallery, and don’t forget to give us your feedback in the comments below, on GitHub, or on Twitter (@PowerShell_Team)!

Please see our documentation here for information on the support of these resource modules.

Included in this Release

You can see a detailed summary of all changes included in this release in the table below. For past release notes, go to the README.md or CHANGELOG.md file on the GitHub repository page for a specific module (see the How to Find DSC Resource Modules on GitHub section below for details on finding the GitHub page for a specific module).

Module Name Version Release Notes
AuditPolicyDsc 1.3.0.0
  • Update LICENSE file to match the Microsoft Open Source Team standard.
  • Added the AuditPolicyGuid resource.
DFSDsc 4.2.0.0
  • Add support for modifying staging quota size in MSFT_DFSReplicationGroupMembership – fixes Issue 77.
  • Refactored module folder structure to move resource to root folder of repository and remove test harness – fixes Issue 74.
  • Updated Examples to support deployment to PowerShell Gallery scripts.
  • Remove exclusion of all tags in appveyor.yml, so all common tests can be run if opt-in.
  • Added .VSCode settings for applying DSC PSSA rules – fixes Issue 75.
  • Updated LICENSE file to match the Microsoft Open Source Team standard – fixes Issue 79
NetworkingDsc 6.2.0.0
  • Added .VSCode settings for applying DSC PSSA rules – fixes Issue 357.
  • Updated LICENSE file to match the Microsoft Open Source Team standard – fixes Issue 363
  • MSFT_NetIPInterface:
    • Added a new resource for configuring the IP interface settings for a network interface.
SecurityPolicyDsc 2.6.0.0
  • Added SecurityOption – Network_access_Restrict_clients_allowed_to_make_remote_calls_to_SAM
  • Bug fix – Issue 105 – Spelling error in SecurityOption “User_Account_Control_Behavior_of_the_elevation_prompt_for_standard_users”
  • Bug fix – Issue 90 – Corrected value for Microsoft_network_server_Server_SPN_target_name_validation_level policy
SharePointDsc 3.0.0.0
  • Changes to SharePointDsc
    • Added support for SharePoint 2019
    • Added CredSSP requirement to the Readme files
    • Added VSCode Support for running SharePoint 2019 unit tests
    • Removed the deprecated resources SPCreateFarm and SPJoinFarm (replaced in v2.0 by SPFarm)
  • SPBlobCacheSettings
    • Updated the Service Instance retrieval to be language independent
  • SPConfigWizard
    • Fixed check for Ensure=Absent in the Set method
  • SPInstallPrereqs
    • Added support for detecting updated installation of Microsoft Visual C++ 2015/2017 Redistributable (x64) for SharePoint 2016 and SharePoint 2019.
  • SPSearchContentSource
    • Added support for Business Content Source Type
  • SPSearchMetadataCategory
    • New resource added
  • SPSearchServiceApp
    • Updated resource to make sure the presence of the service app proxy is checked and created if it does not exist
  • SPSecurityTokenServiceConfig
    • The resource only tested for the Ensure parameter. Added more parameters
  • SPServiceAppSecurity
    • Added support for specifying array of access levels.
    • Changed implementation to use Grant-SPObjectSecurity with Replace switch instead of using a combination of Revoke-SPObjectSecurity and Grant-SPObjectSecurity
    • Added all supported access levels as available values.
    • Removed unknown access levels: Change Permissions, Write, and Read
  • SPUserProfileProperty
    • Removed obsolete parameters (MappingConnectionName, MappingPropertyName, MappingDirection) and introduced new parameter PropertyMappings
  • SPUserProfileServiceApp
    • Updated the check for successful creation of the service app to throw an error if this is not done correctly.
  The following changes will break v2.x and earlier configurations that use these resources:
  • Implemented IsSingleInstance parameter to force that the resource can only be used once in a configuration for the following resources:
    • SPAntivirusSettings
    • SPConfigWizard
    • SPDiagnosticLoggingSettings
    • SPFarm
    • SPFarmAdministrators
    • SPInfoPathFormsServiceConfig
    • SPInstall
    • SPInstallPrereqs
    • SPIrmSettings
    • SPMinRoleCompliance
    • SPPasswordChangeSettings
    • SPProjectServerLicense
    • SPSecurityTokenServiceConfig
    • SPShellAdmin
  • Standardized Url/WebApplication parameter to default WebAppUrl parameter for the following resources:
    • SPDesignerSettings
    • SPFarmSolution
    • SPSelfServiceSiteCreation
    • SPWebAppBlockedFileTypes
    • SPWebAppClientCallableSettings
    • SPWebAppGeneralSettings
    • SPWebApplication
    • SPWebApplicationAppDomain
    • SPWebAppSiteUseAndDeletion
    • SPWebAppThrottlingSettings
    • SPWebAppWorkflowSettings
  • Introduced new mandatory parameters
    • SPSearchResultSource: Added option to create Result Sources at different scopes.
    • SPServiceAppSecurity: Changed parameter AccessLevel to AccessLevels in MSFT_SPServiceAppSecurityEntry to support array of access levels.
    • SPUserProfileProperty: New parameter PropertyMappings
SharePointDsc 3.1.0.0
  • Changes to SharePointDsc
    • Updated LICENSE file to match the Microsoft Open Source Team standard.
  • ProjectServerConnector
    • Added a file hash validation check to prevent the ability to load custom code into the module.
  • SPFarm
    • Fixed localization issue where TypeName was in the local language.
  • SPInstallPrereqs
    • Updated links in the Readme.md file to docs.microsoft.com.
    • Fixed required prereqs for SharePoint 2019, added MSVCRT11.
  • SPManagedMetadataServiceApp
    • Fixed issue where Get-TargetResource method throws an error when the service app proxy does not exist.
  • SPSearchContentSource
    • Corrected issue where the New-SPEnterpriseSearchCrawlContentSource cmdlet was called twice.
  • SPSearchServiceApp
    • Fixed issue where Get-TargetResource method throws an error when the service application pool does not exist.
    • Implemented check to make sure cmdlets are only executed when it actually has something to update.
    • Deprecated WindowsServiceAccount parameter and moved functionality to new resource (SPSearchServiceSettings).
  • SPSearchServiceSettings
    • Added new resource to configure search service settings.
  • SPServiceAppSecurity
    • Fixed unavailable utility method (ExpandAccessLevel).
    • Updated the schema to no longer specify username as key for the sub class.
  • SPUserProfileServiceApp
    • Fixed issue where localized versions of Windows and SharePoint would throw an error.
  • SPUserProfileSyncConnection
    • Corrected implementation of Ensure parameter.
StorageDsc 4.3.0.0
  • WaitForDisk:
    • Added readonly-property isAvailable which shows the current state of the disk as a boolean – fixes Issue 158.
xBitlocker 1.3.0.0
  • Update appveyor.yml to use the default template.
  • Added default template files .gitattributes, and .vscode settings.
  • Fixes most PSScriptAnalyzer issues.
  • Fix issue where AutoUnlock is not set if requested, if the disk was originally encrypted and AutoUnlock was not used.
  • Add remaining Unit Tests for xBitlockerCommon.
  • Add Unit tests for MSFT_xBLTpm
  • Add remaining Unit Tests for xBLAutoBitlocker
  • Add Unit tests for MSFT_xBLBitlocker
  • Moved change log to CHANGELOG.md file
  • Fixed Markdown validation warnings in README.md
  • Added .MetaTestOptIn.json file to root of module
  • Add Integration Tests for module resources
  • Rename functions with improper Verb-Noun constructs
  • Add comment based help to any functions without it
  • Update Schema.mof Description fields
  • Fixes issue where Switch parameters are passed to Enable-Bitlocker even if the corresponding DSC resource parameter was set to False (Issue 12)
xExchange 1.25.0.0
  • Opt-in for the common test flagged Script Analyzer rules (issue 234).
  • Opt-in for the common test testing for relative path length.
  • Removed the property PSDscAllowPlainTextPassword from all examples so the examples are secure by default. The property PSDscAllowPlainTextPassword was previously needed to (test) compile the examples in the CI pipeline, but now the CI pipeline is using a certificate to compile the examples.
  • Opt-in for the common test that validates the markdown links.
  • Fix typo of the word “Certificate” in several example files.
  • Add spaces between array members.
  • Add initial set of Unit Tests (mostly Get-TargetResource tests) for all remaining resource files.
  • Add WaitForComputerObject parameter to xExchWaitForDAG
  • Add spaces between comment hashtags and comments.
  • Add space between variable types and variables.
  • Fixes issue where xExchMailboxDatabase fails to test for a Journal Recipient because the module did not load the Get-Recipient cmdlet (335).
  • Fixes broken Integration tests in MSFT_xExchMaintenanceMode.Integration.Tests.ps1 (336).
  • Fix issue where Get-ReceiveConnector against an Absent connector causes an error to be logged in the MSExchange Management log.
  • Rename poorly named functions in xExchangeDiskPart.psm1 and MSFT_xExchAutoMountPoint.psm1, and add comment based help.
xHyper-V 3.14.0.0
  • MSFT_xVMHost:
    • Added support to Enable / Disable VM Live Migration. Fixes Issue 155.

How to Find Released DSC Resource Modules

To see a list of all released DSC Resource Kit modules, go to the PowerShell Gallery and display all modules tagged as DSCResourceKit. You can also enter a module’s name in the search box in the upper right corner of the PowerShell Gallery to find a specific module.

Of course, you can also always use PowerShellGet (available starting in WMF 5.0) to find modules with DSC Resources:

# To list all modules that tagged as DSCResourceKit
Find-Module -Tag DSCResourceKit 
# To list all DSC resources from all sources 
Find-DscResource

Please note only those modules released by the PowerShell Team are currently considered part of the ‘DSC Resource Kit’ regardless of the presence of the ‘DSC Resource Kit’ tag in the PowerShell Gallery.

To find a specific module, go directly to its URL on the PowerShell Gallery:
http://www.powershellgallery.com/packages/< module name >
For example:
http://www.powershellgallery.com/packages/xWebAdministration

How to Install DSC Resource Modules From the PowerShell Gallery

We recommend that you use PowerShellGet to install DSC resource modules:

Install-Module -Name < module name >

For example:

Install-Module -Name xWebAdministration

To update all previously installed modules at once, open an elevated PowerShell prompt and use this command:

Update-Module

After installing modules, you can discover all DSC resources available to your local system with this command:

Get-DscResource

How to Find DSC Resource Modules on GitHub

All resource modules in the DSC Resource Kit are available open-source on GitHub.
You can see the most recent state of a resource module by visiting its GitHub page at:
https://github.com/PowerShell/< module name >
For example, for the CertificateDsc module, go to:
https://github.com/PowerShell/CertificateDsc.

All DSC modules are also listed as submodules of the DscResources repository in the DscResources folder and the xDscResources folder.

How to Contribute

You are more than welcome to contribute to the development of the DSC Resource Kit! There are several different ways you can help. You can create new DSC resources or modules, add test automation, improve documentation, fix existing issues, or open new ones.
See our contributing guide for more info on how to become a DSC Resource Kit contributor.

If you would like to help, please take a look at the list of open issues for the DscResources repository.
You can also check issues for specific resource modules by going to:
https://github.com/PowerShell/< module name >/issues
For example:
https://github.com/PowerShell/xPSDesiredStateConfiguration/issues

Your help in developing the DSC Resource Kit is invaluable to us!

Questions, comments?

If you’re looking into using PowerShell DSC, have questions or issues with a current resource, or would like a new resource, let us know in the comments below, on Twitter (@PowerShell_Team), or by creating an issue on GitHub.

Katie Kragenbrink
Software Engineer
PowerShell DSC Team
@katiedsc (Twitter)
@kwirkykat (GitHub)

Version agnostic Management Packs

We have started releasing version agnostic management packs for our customers. In the past, we released a new management pack whenever a new version of Windows or a new Windows feature became available for monitoring, and customers had to install and manage different management packs to monitor their workloads. Going forward, the same management pack will work with different Windows versions.

For example: a customer who wants to monitor Windows Server 2016 and Windows Server, version 1709 only requires one management pack. Once we add support for Windows Server 2019, the same MP will be updated to support Windows Server 2019 and 2016.

We have updated many important management packs that monitor Windows workloads to be version agnostic. Please find the list here:

https://social.technet.microsoft.com/wiki/contents/articles/16174.microsoft-management-packs.aspx

We use a naming convention for our version agnostic management packs: the minimum supported versions, followed by “Plus”.

For example, “Microsoft System Center Management Pack for Windows Server Operating System 2016 and 1709 Plus”.

Please use our Updates and Recommendations feature, which can help:

  • Install the required management pack for monitoring a workload running on an agent.
  • Recommend updating the management pack as soon as a newer version is available.

Thanks,

Neha
