
What’s new in failover clustering: #06 Virtual machine start ordering


This post was authored by Subhasish Bhattacharya, Program Manager, Windows Server

Introduction: “Special” virtual machines

Not all virtual machines (VMs) in your production deployment are created equal… some are just special! It is therefore important for these special “Utility” VMs to start up before the “Dependent” VMs in your private cloud that rely on them. Consider a VM hosting the Domain Controller for your private cloud. It is imperative for that VM to start before any VM in your private cloud that depends on Active Directory.

Virtual machine priority in Windows Server

Today in Windows Server, VM start ordering is addressed by configuring the priority of VMs. VMs can be designated as Low, Medium, or High priority, which ensures that the most important VMs are started first and that, under resource constraints, the most important VMs keep running. However, there is no cross-node orchestration by VM priority across the nodes in a cluster; each cluster node has an isolated view of the priority of the VMs it is hosting. Additionally, for VM start ordering based on priority, a VM is considered to be running once it reaches the online state, which often does not give its dependent VMs a sufficient head start.
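For reference, priority on a failover cluster is simply a property of the cluster group that owns the VM. A minimal PowerShell sketch, assuming the FailoverClusters module and a clustered VM group named "SQL-VM1" (a hypothetical name):

    # Priority values: 3000 = High, 2000 = Medium, 1000 = Low, 0 = No Auto Start
    Import-Module FailoverClusters
    (Get-ClusterGroup -Name "SQL-VM1").Priority = 3000
    # Review the priority assigned to every group in the cluster
    Get-ClusterGroup | Select-Object -Property Name, Priority, State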

The need for virtual machine start ordering in your private cloud

Let us consider some scenarios to motivate the need for VM start ordering in our production deployments:

  1. A multi-tiered application where the database VMs have to start first, followed by the middle-tier VMs and lastly, the front-end VMs.
  2. An integrated system, such as the Cloud Platform System, where infrastructure VMs (hosting services like Active Directory) need to start first, followed by application VMs (such as those hosting SQL), and then the front-end VMs hosting the management infrastructure.
  3. A hyper-converged cluster where storage utility VMs need to start before management and tenant VMs. A similar scenario exists for storage appliances.
  4. Converged clusters where at least one Domain Controller VM needs to start up before VMs hosting applications with Active Directory dependencies can be brought up.

Virtual machine start ordering

Virtual machine start ordering enhances VM orchestration in your private cloud by providing the following:

Special VMs

•    VMs can be anointed as “Utility” VMs which are slated to start before all other VMs.

Orchestration

•    Groups of VMs can be defined to represent tiers.
•    Startup indicators and triggers are available to determine when each VM group can be considered started.

Start ordering

•    Multi-layer dependencies can be created between different VM groups to define a start order.

Extending beyond VMs

Thus far in this blog post I have discussed the start ordering of VMs. However, this feature enables you to orchestrate the start ordering for any application represented as a cluster group (for example: a cluster group that is used to make your in-house application highly available)!
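To make the orchestration above concrete, here is a minimal sketch using the cluster group set cmdlets in the Windows Server 2016 Technical Preview. The cmdlet names and group names are assumptions based on the preview; verify them with Get-Command *ClusterGroupSet* on your build.

    # Tier 1: "Utility" VMs, for example a Domain Controller VM named "DC-VM"
    New-ClusterGroupSet -Name "Infra"
    Add-ClusterGroupToSet -Name "Infra" -Group "DC-VM"

    # Tier 2: dependent application VMs that should wait for the utility tier
    New-ClusterGroupSet -Name "App"
    Add-ClusterGroupToSet -Name "App" -Group "SQL-VM"

    # Start the "App" set only after the "Infra" set is considered started
    Add-ClusterGroupSetDependency -Name "App" -Provider "Infra"

Because group sets operate on cluster groups, the same pattern applies to any highly available role, not just VMs.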

To try this new feature in Windows Server 2016, download the Technical Preview.

Check out the series:


Cloud platforms – the banks of the future


This post was authored by Adam Dodds, Research Director Channel Strategies, Alliances and Brokerage, IDC


This is the fifth in a guest blog post series by IDC on trusted cloud. This post details the growing importance of cloud platforms for storing and protecting data.

Having had the opportunity to spend time with vendors, services providers, and organizations throughout the world, the universal theme that I keep hearing is the increasing recognition of information as a potentially strategic asset. This is an incredibly important discussion for an enterprise to be having as this factor must be acknowledged as a leading influence in the process of selecting a cloud provider.

The challenge is that recognizing information as a strategic asset is an executive-level opportunity (and not one for the CIO’s office alone), and it is a compelling reason for the executives of the business to become more involved in the digital journey. It reflects the fact that digitization is creating new points of value in a business that need to be recognized and nurtured. A great example of the value of a business’s information is Caesars Casino, where the customer loyalty data was valued at $1 billion (U.S.) in 2015. This example is convincing evidence that information has a tangible value, even though the financial rules that govern the treatment of information on the balance sheet have yet to mature. The rising impact of information as a power currency is already being recognized at a business valuation level. The high market value of digital natives now achieving global scale is a real reflection of this change (e.g., Facebook or Uber).

Once the shift is made around information having a value, we can start thinking more about how we treat it, how we add value to it, how we protect it and, most importantly, where we put it from a cloud provider perspective.

The big question asked by most organizations is, “We know our information has a value, but how do we quantify that valuation?” The answer is not simple and the approaches to valuing information vary in the market. These approaches center around three common themes:

  1. How much revenue/profit could be attained if the information was sold?
  2. What would the cost be to the business if a competitor had the information?
  3. What would the financial impact be of not having the data available from a business operations perspective?

Answering these questions will provide an understanding of how – or if – information is valued in the organization, what data is valued, and where the emphasis on valuation should begin. By considering the context of the valuation, organizations can develop a tangible and rigorous methodology for identifying the approach to be taken.

So why is this process important? Because more mechanisms are needed to bring the business thinking into alignment with the potential of digital technology and services as business “goes digital.” CEOs and their boards are looking for alternatives to the traditional risk-based approach to determine how and when they should get involved.

With this in mind, we shift to the challenge of choosing a service delivery model for new information-based services and its relevance as a bank type structure. As with a bank, it is important that the service delivery provider balances the accessibility of the information with the need for robust security. Information is not physically tangible so there is an absolute need for organizations to pay close attention to their connection and engagement with their service delivery model. This is a crucial step in gaining customer trust regarding the stewardship of their information, and it should be treated no differently than safeguarding the organization’s cash in a bank. This principle should be adhered to irrespective of whether the datacenter is global or local. However, most organizations do not have the capacity or financial capability to provide bank-like service delivery. Indeed, implementing a delivery platform to bank-level would likely make an information value generation project unviable. What is becoming increasingly common is to use a provider of cloud services to build a new line of business on a cloud services platform.

Choosing a platform provider uses much the same set of criteria as when selecting a bank but with obvious differences. A set of important characteristics include:

  • Accessibility and transparency. How is the organization able to physically reference the provider’s facilities and systems and have visibility into their unique environment? This is an important step in the cloud platform adoption process.
  • Culture and brand. How aligned at the personnel level are the employees of both provider and enterprise? Does the engagement reflect the way in which the organization does business?
  • The type of relationship being built. Who owns any developed IP? New ecosystem models are being built as enterprises realize that they can leverage, with the use of new digital technologies and cloud native businesses, the IP and capabilities of aligned but non-competing specialist service providers. (See Walmart, Uber, Lyft.)
  • Provider viability. How likely is the provider to stay in the business of providing this platform? What is their development roadmap and do they have sufficient funding to maintain the R&D levels? Are they committed for the foreseeable future?
  • Cost. How transparent are the costs of the service and how focused is the provider in building a relationship for the long term? This can be seen through a real commitment to optimizing the customers’ environment both commercially and technically (see my comment above on relationship and ecosystem – a partner approach can be less risky but not a surety).
  • Location. Datacenter investment is aligned to a range of variables. Geographic stability, political stability, and power assurance remain top characteristics. Being local remains important in many geographies, especially where local legislation is used to drive local industry development. However, where local hosting is not available, the ability to choose a location in a nearby country, or globally, offers a level of comfort provided the connectivity can be relied upon.
  • Reliability. Providers should be able to clearly demonstrate the uptime of their environment. Architecting for resilience is important but this does not mean that you should design around the provider’s poor solution.
  • Compliance. Is the provider able to demonstrate an awareness of your business or your industry? Irrespective of whether data sovereignty or the industry itself has requirements around information availability, it is important to understand what is needed and where the risk lies (e.g., in health care this is critical especially regarding patient records).

As the value of an organization’s information grows to represent a significant proportion of a business, it is critical that the business becomes more involved in the decision-making process. There are too many cases where an organization has faced considerable business impact due to a data breach and, in hindsight, both the CEO and the board wished they had been more educated and involved in the assessment process (Sony and Target are two examples that come to mind). At a technical level the security, budgets, workload locations, and the shift to as-a-service models are driven largely by the CIO’s office. However, this is no longer the right answer as data and information shift from a utility asset to an invaluable competitive differentiator for all organizations.

Valuing that asset has never been more critical. The sooner that organizations treat information as they would treat more liquid assets, the faster they will realize its potential value. The downstream effect of this change will be a closer inspection of the providers with whom they choose to entrust their information. Only those providers that are hyper aware of their responsibility over this information and strive for transparency to ensure that this trust is warranted will be successful.

To learn more, visit the Trusted Cloud website.

New in Intune: Conditional access for browsers, Dynamics CRM Online and Cisco ISE


With our latest Intune service update, we’re further expanding on our conditional access capabilities.

Conditional access allows you to manage access to corporate email, files, and other resources based on customizable conditions that ensure security and compliance, including location, risk, user, device, and app compliance. As conditions shift, access policies defined by IT are triggered to ensure that your corporate data is protected. And all this is done without on-premises gateways or appliances.

Some of the enhancements in this release include:

Conditional access for browsers

Now, you can set a conditional access policy for Exchange Online and SharePoint Online, so that they can only be accessed from supported web browsers on managed and compliant iOS and Android devices. End users who try to sign in to Outlook Web Access (OWA) and SharePoint Online sites from unmanaged iOS and Android devices will be prompted to enroll their device with Intune as well as to fix any non-compliance issues before they can access their email and documents.

Conditional access for Dynamics CRM Online
Now, you can set a conditional access policy for Dynamics CRM Online, so that it can only be accessed by managed and compliant iOS and Android devices. End users who try to sign in to the Dynamics CRM mobile app on iOS and Android will be prompted to enroll with Intune as well as to remediate any non-compliance issues before the sign-in is complete. This is similar to what is already available for Exchange Online, SharePoint Online and Skype for Business Online.

Cisco ISE network access control policy for Intune
Customers who use the Cisco Identity Service Engine (ISE) 2.1 and also use Microsoft Intune can set a network access control policy in ISE that will ensure that only devices that are managed and compliant with Intune are allowed to connect to the network using WiFi or VPN. End users with noncompliant devices will be prompted to enroll and remediate any compliance issues to gain access to the network.

For more on these and other new features and improvements being rolled out in Intune, visit our What’s new in Microsoft Intune documentation page. For more information about new Hybrid (ConfigMgr connected with Intune) features, check out our Hybrid What’s New page.

#Azure AD Mailbag: Hybrid Identity and ADFS


Hey there, this is Ramiro Calderon from the Azure AD Customer Success Team. Remember us? We took a brief hiatus (sorry about that) but now we are back! I wanted to write up some answers to common AD FS questions we get in the context of hybrid identity with Azure AD. Let’s get into it.

 

Question 1:

I have multiple root domains in my on-premises environment and child domains on each of them (contoso.com, sales.contoso.com, fabrikam.com and procurement.fabrikam.com). How do I set the AD FS issuance rules for Azure AD?

Answer 1:

You can do this in three steps:

1. Register the root domains first (contoso.com and fabrikam.com), so that the federation configuration is shared with all subdomains.

2. Because you have multiple root domains, you must use the -SupportsMultipleDomains switch when configuring federation. This creates the base AD FS rules that generate the issuer claim.

3. We are not quite done yet. Now we have to make sure the child domains get an issuer consistent with the configuration above. All child domains on *.contoso.com should have the issuer http://contoso.com/adfs/services/trust, and all child domains on *.fabrikam.com have to have the issuer http://fabrikam.com/adfs/services/trust. Remove the default rule generated for the issuer claim and replace it with custom rules (one for each root domain, as shown below).

[Image: custom issuer claim rules, one per root domain]
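As a hedged illustration of such custom issuer rules (the exact claim types, regular expressions, and relying party display name below are assumptions; adjust and verify them against your own environment), something like the following could be applied with the AD FS PowerShell module:

# Step 2 above, per root domain:
# Convert-MsolDomainToFederated -DomainName contoso.com -SupportsMultipleDomains

$issuerRules = @'
@RuleName = "Issuer for contoso.com and its child domains"
c:[Type == "http://schemas.xmlsoap.org/claims/UPN", Value =~ "^.+@(.+\.)?contoso\.com$"]
 => issue(Type = "http://schemas.microsoft.com/ws/2008/06/identity/claims/issuerid",
          Value = "http://contoso.com/adfs/services/trust");

@RuleName = "Issuer for fabrikam.com and its child domains"
c:[Type == "http://schemas.xmlsoap.org/claims/UPN", Value =~ "^.+@(.+\.)?fabrikam\.com$"]
 => issue(Type = "http://schemas.microsoft.com/ws/2008/06/identity/claims/issuerid",
          Value = "http://fabrikam.com/adfs/services/trust");
'@

# Merge these issuer rules with the existing UPN/ImmutableID rules before applying;
# -IssuanceTransformRules replaces the whole rule set on the relying party trust.
Set-AdfsRelyingPartyTrust -TargetName "Microsoft Office 365 Identity Platform" `
    -IssuanceTransformRules $issuerRules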

 

Question 2:

I want to have a clean username for my users to sign in to Azure. What options do I have?

Answer 2:

This is a very common request for the following cases:

1. Users are not familiar with the UPN on premises and confuse it with their email address, so you want them to sign in with their email address.

2. Usernames are arcane (for example, an internal employee ID), and the cloud offers an opportunity to clean them up into a simpler login (e.g., john.smith@contoso.com).

3. On-premises domains have naming conventions you don’t want to carry into the cloud (for example, names left over from mergers and acquisitions, geography, or other attributes such as external/contractor status).

See the flowchart below to navigate some different options to achieve this:

[Image: flowchart of the options for cleaning up the sign-in name]
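As a hedged sketch of the kind of options such a flowchart walks through (the attribute, domain, and account names below are assumptions; test in a lab before using):

    # Option A: make the on-premises UPN match the desired cloud sign-in name.
    # Add a routable UPN suffix to the forest, then update the affected users.
    Get-ADForest | Set-ADForest -UPNSuffixes @{Add = "contoso.com"}
    Get-ADUser -Identity jsmith | Set-ADUser -UserPrincipalName "john.smith@contoso.com"

    # Option B: keep the existing UPNs and let AD FS authenticate with another
    # attribute (Alternate Login ID, available in AD FS on Windows Server 2012 R2).
    Set-AdfsClaimsProviderTrust -TargetIdentifier "AD AUTHORITY" `
        -AlternateLoginID mail -LookupForests contoso.com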

 

Question 3:

What is the difference between the service communications certificate and the SSL certificate? I noticed my service communications certificate expired and things continue to work.

Answer 3:

The short answer: the AD FS service communications certificate is used in a very specific subset of WS-Trust scenarios, which are not enabled by default and are not used for Azure AD scenarios. Chances are you don’t need any of those, and that’s why you did not notice.

Now, the long answer:

AD FS (and WCF in general) supports three security modes: Transport, Message, and Mixed. The service communications certificate (also known internally in WCF lingo as the “service identity”) is used in message security. Basically, the SOAP message is encrypted using key material derived from the certificate of the service (hence “service communications”). When using message security, AD FS generates and encrypts a symmetric key that can only be decrypted by the owner of the service’s private key. This is why these endpoints can go over HTTP (no SSL): the payload itself is encrypted without depending on the channel.

Conceptually, this is similar to what TLS provides in the actual HTTP channel, and this is why we set both SSL and Service Communication to be the same during the initial configuration.

You can learn more about WCF security modes here and more about message security here.

 

Question 4:

How do I turn debug logs for AD FS?

Answer 4:

If you want to debug something that requires detailed traces (for example, the flow of claims as the rules execute through different stages), run the following commands:

[Image: commands to enable AD FS debug tracing]
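A minimal sketch of such commands for AD FS on Windows Server 2012 R2 (the log name and log-level values are assumptions to verify on your version):

    # Raise the AD FS log level so verbose traces are emitted
    Set-AdfsProperties -LogLevel Errors, Warnings, Information, Verbose, SuccessAudits, FailureAudits
    # Enable the analytic/debug channel that receives the traces
    wevtutil.exe set-log "AD FS Tracing/Debug" /e:true /q:true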

Then, to see the traces in the event log, do the following:

1. Enable “Show Analytic and Debug Logs” in Event Viewer:

[Image: the “Show Analytic and Debug Logs” option in Event Viewer]

2. Navigate to “AD FS Tracing/Debug” to get your debug logs:

[Image: the AD FS Tracing/Debug log in Event Viewer]

When you are done with your debugging or investigation, turn the log off:

[Image: commands to disable AD FS debug tracing]
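A matching sketch for turning tracing back off (same assumptions as above):

    # Disable the analytic/debug channel and drop back to a typical default log level
    wevtutil.exe set-log "AD FS Tracing/Debug" /e:false /q:true
    Set-AdfsProperties -LogLevel Errors, Warnings, Information, SuccessAudits, FailureAudits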

Question 5:

Can I use a third party proxy, or Azure AD Application Proxy instead of WAP for AD FS endpoints?

Answer 5:

AD FS in Windows Server 2012 R2 requires proxies to implement the MS-ADFSPIP protocol, so traditional endpoint proxies will not work out of the box.

Question 6:

What tools can I use to figure out if I have enough AD FS servers to handle my traffic?

Answer 6:

In general terms, AD FS is driven by CPU, so that would be the main metric to keep an eye on. Azure AD Connect Health provides a great view of the load profile of the AD FS servers.

If you are planning a net new deployment or you want to verify your planned capacity, check out our updated AD FS capacity planning spreadsheet.

Question 7:

Where can I find a list with the available updates for AD FS?

Answer 7:

You can check the list of all the AD FS updates here to confirm your server is up to date. For ongoing confirmation, Azure AD Connect Health will alert you when new updates are available.

We hope you’ve found this post and this series to be helpful. For any questions you can reach us at
AskAzureADBlog@microsoft.com, the Microsoft Forums and on Twitter @AzureAD, @MarkMorow and @Alex_A_Simons

-Ramiro Calderon and Mark Morowczynski

KB: Event ID 33601 when you process an SLA workflow in Service Manager


When you are processing a Service Level Agreement (SLA) workflow in System Center 2012 R2 Service Manager (SCSM 2012 R2), the following error may be logged in the Operations Manager log:

Log Name: Operations Manager
Source: SMCMDB Subscription Data Source Module
Event ID: 33601
Task Category: None
Level: Error
Keywords: Classic
User: N/A
Computer: ComputerName
Description:
The database subscription configuration is not valid.
The following errors were encountered:
Exception message: Subscription configuration error. Error reading TargetType element. Error message: Invalid TargetType attribute specified. Target type id 9bc85fd0-934c-bfdb-9643-63779a0f3742 must be either abstract or first non abstract base type.
One or more subscriptions were affected by this.
Subscription name: WorkflowSubscription_9d183789_7944_49f2_b5fe_2d8f77ad6ddc
Instance name: SLA Workflow Target: DisplayName
Instance ID: {69CBC824-AA85-B123-58C3-A46F97E54BF7}
Management group: ManagementGroup

This can occur when the Service Level Objective (SLO) has been configured to use a derived class. For example, assume that you create a new class that is based on the Service Request class and that it is named SRNewClass. When you create a Service Level Objective and you select SRNewClass from the “Class” section on the General tab in the wizard, event ID 33601 is returned during the workflow process.

For complete details as well as a work around, see the following:

KB3171966: Event ID 33601 when you process an SLA workflow in Service Manager (https://support.microsoft.com/en-us/kb/3171966)

 

J.C. Hornbeck, Solution Asset PM
Microsoft Enterprise Cloud Group


Windows Server 2016 new Current Branch for Business servicing option


We are excited to announce that the official launch of Windows Server 2016 will be at the Ignite conference this fall. We hope you can join us in Atlanta for the excitement! Windows Server 2016 is the cloud-ready operating system that delivers new layers of security and Azure-inspired innovation for the applications and infrastructure that power your business. New capabilities will help you:

  • Increase security and reduce business risk with multiple layers of protection built into the operating system.
  • Evolve your datacenter to save money and gain flexibility with software-defined datacenter technologies inspired by Microsoft Azure.
  • Innovate faster with an application platform optimized for the applications you run today, as well as the cloud-native apps of tomorrow.

Technical Preview 5 is our final preview prior to launch and is feature complete, so download it today and try out all the new features in Windows Server 2016. Deploy, manage and secure Windows Server 2016 with the upcoming release of System Center 2016.

Windows Server 2016 editions include:

  • Datacenter: This edition continues to deliver significant value for organizations that need unlimited virtualization along with powerful new features including Shielded Virtual Machines, software-defined storage and software-defined networking.
  • Standard: This edition is ideal for organizations that need limited virtualization but require a robust, general purpose server operating system.
  • Essentials: This edition is designed for smaller organizations with fewer than 50 users.

These editions will be available for purchase on the October 2016 price list. Details on pricing for Windows Server 2016 can be found here.

It’s also important to note that for the Standard and Datacenter editions, there are three installation options:

  • Server with Desktop Experience: The Server with Desktop Experience installation option (previously known as Server with a GUI) provides an ideal user experience for those who need to run an app that requires local UI or for Remote Desktop Services Host. This option has the full Windows client shell and experience, consistent with Windows 10 Anniversary edition Long Term Servicing Branch (LTSB), with the server Microsoft Management Console (MMC) and Server Manager tools available locally on the server.
  • Server Core: The Server Core installation option removes the client UI from the server, providing an installation that runs the majority of the roles and features on a lighter install. Server Core does not include MMC or Server Manager, which can be used remotely, but does include limited local graphical tools such as Task Manager as well as PowerShell for local or remote management.
  • Nano Server: The Nano Server installation option provides an ideal lightweight operating system to run “cloud-native” applications based on containers and micro-services. It can also be used to run an agile and cost-effective datacenter with a dramatically smaller OS footprint. Because it is a headless installation of the server operating system, management is done remotely via Core PowerShell, the web-based Server Management Tools (SMT), or existing remote management tools such as MMC (a minimal remoting sketch follows this list).
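As a minimal sketch of that remote management story (the computer name "nano01" is a placeholder, and the TrustedHosts step is only needed for machines that are not domain joined):

    # Allow the local machine to talk to a non-domain-joined Nano Server (skip if domain joined)
    Set-Item -Path WSMan:\localhost\Client\TrustedHosts -Value "nano01" -Force
    # Open an interactive remote PowerShell session
    $cred = Get-Credential -UserName "nano01\Administrator" -Message "Nano Server credentials"
    Enter-PSSession -ComputerName "nano01" -Credential $cred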

Announcing servicing guidelines for Windows Server 2016

In prior releases, Windows Server has been serviced and supported with a “5+5” model, meaning 5 years of mainstream support and 5 years of extended support, and this will continue with Windows Server 2016. Customers who choose to install full Windows Server 2016 with the Desktop Experience or Server Core will maintain this servicing experience, which will be known as the Long Term Servicing Branch (LTSB).

Customers choosing the Nano Server installation will opt into a more active servicing model similar to the experience with Windows 10. Specifically, these periodic releases are known as Current Branch for Business (CBB) releases. This approach supports customers who are moving at a “cloud cadence” of rapid development lifecycles and wish to innovate more quickly. Since this type of servicing continues to provide new features and functionality, Software Assurance is also required to deploy and operate Nano Server in production.

Installation Option                LTSB servicing model    CBB servicing model
Server with Desktop Experience     Yes                     No
Server Core                        Yes                     No
Nano Server                        No                      Yes

Our goal is to provide feature updates approximately two or three times per year for Nano Server. The model will be similar to the Windows client servicing model, but we expect it to have some differences. While we share the same goal of delivering new and valuable technology to our customers rapidly, we understand that a server operating environment has unique requirements.

For example, while it will be necessary to stay current with new versions as they come out, the new versions will not auto-update a server. Instead, a manual installation will be performed by the admin when they choose. Because Nano Server will be updated on a more frequent basis, customers can be no more than two Nano Server CBB releases behind. Only two CBB releases will be serviced at any given time, therefore when the third Nano Server release comes out, you will need to move off of #1 as it will no longer be serviced. When #4 comes out, you will need to move off of #2, and so on.

Windows Server 2016 meets businesses and organizations where they are today, and introduces the innovation needed for the transition to cloud computing when ready. This release puts the power of choice in the hands of our customers, making Windows Server 2016 the perfect stepping stone to the cloud. We hope you join us for the launch at Ignite, and as always, we look forward to your feedback and suggestions as we continue to innovate in Windows Server.

Which Linux Integration Services should I use in my Linux VMs?


Overview
If you run Linux guest VMs on Hyper-V, you may wonder about how to get the “best” Linux Integration Services (LIS) for your Linux distribution and usage scenario.  Getting the “best” is a bit nuanced, so this blog post gives a detailed explanation to enable you to make the right choice for your situation.

Microsoft has two separate tracks for delivering LIS.  It’s important to understand that the tracks are separate, and don’t overlap with each other.  You have to decide which track works best for you.

“Built-in” LIS
One track is through the Linux distro vendors, such as Red Hat, SUSE, Oracle, Canonical, and the Debian community.  Developers from Microsoft and the Linux community at large submit LIS updates to the Linux Kernel Mailing List, and get code review feedback from the Linux community.  When the feedback process completes, the changes are incorporated into the upstream Linux kernel as maintained by Linus Torvalds and the Linux community “maintainers”.

After acceptance Microsoft works with the distro vendors to backport those changes into whatever Linux kernel the Linux distro vendors are shipping.  The distro vendors take the changes, then build, test, and ultimately ship LIS as part of their release.  Microsoft gets early versions of the releases, and we test as well and give feedback to the distro vendor.  Ultimately we converge at a point where we’re both happy with the release. We do this with Red Hat, SUSE, Canonical, Oracle, etc. and so this process covers RHEL, CentOS, SLES, Oracle Linux, and Ubuntu.  Microsoft also works with the Debian community to accomplish the same thing.

This track is what our documentation refers to as “built-in”.  You get LIS from the distro vendor as part of the distro release.  And if you upgrade from CentOS 7.0 to 7.1, you’ll get updated LIS with the 7.1 update, just like any other Linux kernel updates.  Same from 7.1 to 7.2. This track is the easiest track, because you don’t do anything special or extra for LIS – it’s just part of the distro release.  It’s important to note that we don’t assign a version number to the LIS that you get this way.  The specific set of LIS changes that you get depends on exactly when the distro vendor pulled the latest updates from the upstream Linux kernel, what they were able to include (they often don’t include every change due to the risk of destabilizing), and various other factors.  The tradeoff with the “built-in” approach is that you won’t always have the “latest and greatest” LIS code because each distro release is a snapshot in time.  You can upgrade to a later distro version, and, for example, CentOS 7.2 will be a later snapshot than CentOS 7.1.  But there are inherent delays in the process.  Distro vendors have freeze dates well in advance of a release so they can test and stabilize.  And, CentOS, in particular, depends on the equivalent RHEL release.

End customer support for “built-in” LIS is via your Linux distro vendor under the terms of the support agreement you have with that vendor.  Microsoft customer support will also engage under the terms of your support agreement for Hyper-V.   In either case, fixing an actual bug in the LIS code will likely be done jointly by Microsoft and the distro vendor.  Delivery of such updated code will come via your distro vendor’s normal update processes.

Microsoft LIS Package
The other track is the Microsoft-provided LIS package, which is available for RHEL, CentOS, and the Red Hat Compatible Kernel in Oracle Linux.  LIS is still undergoing a moderate rate of change as we make performance improvements, handle new things in Azure, and support the Windows Server 2016 release with a new version of Hyper-V.  As an alternative to the “built-in” LIS described above, Microsoft provides an LIS package that is the “latest and greatest” code changes.  We provide this package backported to a variety of older RHEL and CentOS distro versions so that customers who don’t stay up-to-date with the latest version from a distro vendor can still get LIS performance improvements, bug fixes, etc.   And without the need to work through the distro vendor, the Microsoft package has shorter process delays and can be more “up-to-date”.   But note that over time, everything in the Microsoft LIS package shows up in a distro release as part of the “built-in” LIS.  The Microsoft package exists only to reduce the time delay, and to provide LIS improvements to older distro versions without having to upgrade the distro version.

The Microsoft-provided LIS packages are assigned version numbers.  That’s the LIS 4.0, 4.1 (and the older 3.5) that you see in the version grids in the documentation, with a link to the place you can download it.  Make sure you get the latest version, and ensure that it is applicable to the version of RHEL/CentOS that you are running, per the grids.

The tradeoff with the Microsoft LIS package is that we have to build it for specific Linux kernel versions.  When you update a CentOS 7.0 to 7.1, or 7.1 to 7.2, you get changes to the kernel from CentOS update repos.  But you don’t get the Microsoft LIS package updates because they are separate.  You have to do a separate upgrade of the Microsoft LIS package.  If you do the CentOS update, but not the Microsoft LIS package update, you may get a binary mismatch in the Linux kernel, and in the worst case, you won’t be able to boot.  The result is that you have extra update steps if you use the Microsoft provided LIS package.  Also, if you are using a RHEL release with support through a Red Hat subscription, the Microsoft LIS package constitutes “uncertified drivers” from Red Hat’s standpoint.  Your support services under a Red Hat subscription are governed by Red Hat’s “uncertified drivers” statement here:  Red Hat Knowledgebase 1067.

Microsoft provides end customer support for the latest version of the Microsoft-provided LIS package, under the terms of your support agreement for Hyper-V.  If you are running something other than the latest version of the LIS package, we’ll probably ask you to upgrade to the latest and see if the problem still occurs.  Because LIS is mostly Linux drivers that run in the Linux kernel, any fixes that Microsoft provides will likely come as a new version of the Microsoft LIS package, rather than as a “hotfix” to an existing version.

Bottom-line
In most cases, using the built-in drivers that come with your Linux distro release is the best approach, particularly if you are staying up-to-date with the latest minor version releases.  You should use the Microsoft provided LIS package only if you need to run an older distro version that isn’t being updated by the distro vendor.  You can also run the Microsoft LIS package if you want to be running the latest-and-greatest LIS code to get the best performance, or if you need new functionality that hasn’t yet flowed into a released distro version.  Also, in some cases, when debugging an LIS problem, we might ask you to try the Microsoft LIS package in order to see if a problem is already fixed in code that is later than what is “built-in” to your distro version.

Here’s a tabular view of the two approaches, and the tradeoffs:

  • Version number. “Built-in” LIS: no version number is assigned; don’t try to compare it with the “4.0”, “4.1”, etc. version numbers assigned to the Microsoft LIS package. Microsoft LIS package: LIS 4.0, 4.1, etc.
  • How up to date? “Built-in” LIS: a snapshot as of the code deadline for the distro version. Microsoft LIS package: the most up-to-date, because it is released directly by Microsoft.
  • Update process. “Built-in” LIS: automatically updated as part of the distro update process. Microsoft LIS package: requires a separate step to update the Microsoft LIS package; bad things can happen if you don’t do this extra step.
  • Can you get the latest LIS updates for older distro versions? “Built-in” LIS: no; the only path forward is to upgrade to the latest minor version of the distro (6.8 or 7.2 for CentOS). Microsoft LIS package: yes; available for a wide range of RHEL/CentOS versions back to RHEL/CentOS 5.2. See this documentation for details on functionality and limitations for older RHEL/CentOS versions.
  • Meets distro vendor criteria for support? “Built-in” LIS: yes. Microsoft LIS package: no, for RHEL; it is considered “uncertified drivers” by Red Hat. Not an issue for CentOS, which has community support.
  • End customer support process. “Built-in” LIS: via your distro vendor, or via Microsoft support; LIS fixes are delivered by the distro vendor’s normal update processes. Microsoft LIS package: via Microsoft support per your Hyper-V support agreement; fixes are delivered as a new version of the Microsoft LIS package.

 

System Center 2016 to launch in September


We are pleased to announce that Microsoft System Center 2016 will be launched at the Microsoft Ignite conference in late September. System Center makes it possible for you to run your IT operations at higher scale and drive more value for your business. System Center 2016 brings a new set of capabilities that integrate with our cloud management tools to help you manage the challenges of moving to the cloud. This release also unlocks new technologies available in Windows Server 2016 that will enhance the software-defined datacenter and provide new layers of security for your operating system.

With the launch of System Center 2016 and Windows Server 2016 in September, you will have a cloud-ready platform and the operations management tools you need to run a secure, efficient, and responsive datacenter. At Ignite, you’ll find an array of sessions to give you the latest updates on System Center and how to take advantage of these new capabilities.

Highlights of System Center 2016 include:

  • Support for new Windows Server 2016 technologies, including lifecycle management for Nano server-based hosts and virtual machines, Storage Spaces Direct, and shielded virtual machines
  • Performance and usability improvements, including all the update rollups since System Center 2012 R2, improved UNIX and Linux monitoring, and the ability to tune management packs and alerts
  • Native integrations with Microsoft Operations Management Suite to give you expanded analytics, data correlation, orchestration, archival, and hybrid management capabilities

You can download System Center 2016 Technical Preview 5 now to get started. See more of What’s New in System Center 2016 and Windows Server 2016.

System Center is a key part of the Microsoft hybrid cloud management strategy. To make it easier to access the value of System Center and the Operations Management Suite, you can now take advantage of a new subscription option. As customers consider their management tools, we believe that the cloud-based capabilities of Operations Management Suite, and the new subscription model which includes System Center, offer a great combination that will make it easier to transition to the cloud.

We look forward to seeing many of you in Atlanta!

Follow us on Twitter @MSCloudMgmt.


Determining the Dominant User and Setting the ManagedBy Computer Attribute


Hi again, this is Stephen Mathews and I’m here to talk about how to determine the dominant or primary user of a Windows operating system. This insight can help administrators facilitate direct communication with the affected user when a system needs management, and can even help non-enterprise users, such as a parent questioning which child is using their computer the most.

We’ll consider the different types of login data available, show how to expose it in the various OS instruments, and then use that information to update the system’s Active Directory computer object ‘ManagedBy’ attribute.

This post uses PowerShell Version 5 on Windows 10 to illustrate examples and it references settings that may not exist in legacy OS versions. All code examples are for illustration purposes only and should be thoroughly tested in a non-production environment. This post is intended to be used within a client OS using its built-in capabilities. Additionally, it is written from an Asset Tracking perspective and is not directly addressing Security and/or User Auditing concerns.

What type of information are you after?

Are you looking for the currently logged in user on the console, remotely logged in users, the last logged in user, the dominant user, or a list of all users?

How will the information be used: for real-time troubleshooting, historical reference, or external app consumption?

Will you script a solution? If so, where will you output the data? Will the script be run manually or automatically? If automatically, will you use a startup or login script, or a scheduled task?

File System

The filesystem can be the quickest and most efficient way to determine the regular users of a system. By expanding the parent directory of the ‘UserProfile’ environment variable, you can see the users that have had a profile created on that machine. You can check these profile directories and see the Created, Accessed, Modified, and Written timestamps for all of the system’s users.

Unfortunately, this can also be the most misleading. The user profile directories are mapped via a Security Identifier (SID) that is stored in the registry. If a user account’s logon name is changed, they will still map to the original folder name. Also, if a user’s profile is corrupted they may not get a local directory and be redirected to the default profile. Additionally, the timestamps may not be updating depending on your OS version and/or auditing settings for those folders. And finally, you may not have rights to see the folders or their attributes.

  • Useful for: All Users, Last Logged On (Last access time), Dominant User (Timespan between created and last access time)
  • Get-ChildItem -Path (Get-Item -Path $env:USERPROFILE).PSParentPath | Select-Object -Property Name,*time*

Registry

The registry contains all the configurations and settings for the OS. There are multiple locations in the registry to find specific information about the users. User accounts are usually stored as SIDs inside the registry and will need to be converted into account names.

  • You can resolve a SID directly inside PowerShell which you’ll see later. You can see additional examples of this in Working with SIDs. This code will be worked into a customized Select-Object property hash-table; you can read about that in Using Calculated Properties.
  • (New-Object -TypeName System.Security.Principal.SecurityIdentifier("S-1-5-18")).Translate([System.Security.Principal.NTAccount])

  • $Object | Select-Object -Property SID,@{Name="Account";Expression={((New-Object -TypeName System.Security.Principal.SecurityIdentifier($_.SID)).Translate([System.Security.Principal.NTAccount])).Value}}

The registry is a sensitive part of the OS and can be corrupted. This risk of corruption leads many organizations to strictly limit and audit registry access. Certain registry settings may change only during startup and/or login, meaning the data may be stale while it’s being queried.

  • Useful for: All Users
  • Get-ChildItem -Path "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\" -Recurse

  • Useful for: Currently logged on users:
  • reg query HKU

  • Get-ItemProperty -Path HKLM:\SYSTEM\CurrentControlSet\Control\hivelist

WMI

Windows Management Instrumentation is an infrastructure that exposes the OS to management. You can find current configuration settings, then get and set those properties.

WMI queries can be difficult to construct and may be resource intensive to the point of resource exhaustion. Take precautions to test the retrieving and setting of WMI components in a test environment before using inside production. Access to WMI may be restricted and audited for the same reasons as the registry.

  • Useful for: All Users, Last Logged On (Last use time), Currently Logged On (Loaded)
  • Get-WmiObject -Class Win32_UserProfile | Select-Object -Property SID,LocalPath,Loaded,LastUseTime,@{Name="Account";Expression={((New-Object -TypeName System.Security.Principal.SecurityIdentifier($_.SID)).Translate([System.Security.Principal.NTAccount])).Value}}

  • Useful for: Currently Logged On (Console)
  • Get-WmiObject -Class Win32_ComputerSystem | Select-Object -Property *ername*

ADSystemInfo

This is an overlooked tool to identify current system Active Directory network settings. The ‘UserName’ property will report the currently logged in user. It requires network connectivity to return settings and there’s no inherent way to run it remotely.

  • Useful for: Currently Logged On (Console)
  • $ADSystemProps = @("ComputerName","DomainDNSName","DomainShortName","ForestDNSName","IsNativeMode","PDCRoleOwner","SchemaRoleOwner","SiteName","UserName")

    $ADSystemInfo = New-Object -ComObject "ADSystemInfo"

    foreach ($ADSystemProp in $ADSystemProps) {

    $Value = $ADSystemInfo.GetType().InvokeMember($ADSystemProp, "GetProperty", $Null, $ADSystemInfo, $Null)

    $ADSystemInfo | Add-Member -MemberType NoteProperty -Name $ADSystemProp -Value $Value -Force

    }

    $ADSystemInfo

Event Logs

Event logs are the record keepers of all activities on the system. As such, they are the definitive source for tracking the login process. Logging can be turned on or off per provider, and the logging level can be tailored based upon the event type: Critical, Error, Warning, Information, and Verbose. The ‘UserID’ property is typically set to the SID of the account creating the event; this is automatically translated for you in Event Viewer. If an individual event log does not populate the ‘UserID’ property, you can parse the event message text for a SID to find events.

  • Get-WinEvent -LogName "Microsoft-Windows-GroupPolicy/Operational" | Select-Object -First 1 -Property *


  • Get-WinEvent -LogName Security | Select-Object -First 1 -Property *

The event log filter can be difficult to configure and a poorly created filter may be resource intensive to the point of resource exhaustion. Access to certain logs may be restricted and not all event logs record the same information in their properties. Furthermore, the logs may be collected into a central repository, making them unavailable or lacking significant detail to make an accurate determination.

The first example below uses the Group Policy Operational log and groups the events by ‘UserID’. In the second example, the events do not populate the ‘UserID’ property, so the message data is parsed for matching SIDs; the list of SIDs is built from the Win32_UserProfile class.

  • Useful for: All Users, Dominant User (Count)
  • Get-WinEvent -LogName "Microsoft-Windows-GroupPolicy/Operational" | Group-Object -Property UserID | Sort-Object -Property Count -Descending | Select-Object -Property Count,Name,@{Name="Account";Expression={((New-Object -TypeName System.Security.Principal.SecurityIdentifier($_.Name)).Translate([System.Security.Principal.NTAccount])).Value}}

  • $SecurityEvents = Get-WinEvent -FilterHashTable @{LogName="Security";ID=4624}

    $WMIUserProfiles = Get-WmiObject -Class Win32_UserProfile

    foreach ($WMIUser in $WMIUserProfiles) {

    $WMIUser | Add-Member -MemberType NoteProperty -Name Account -Value ((New-Object -TypeName System.Security.Principal.SecurityIdentifier($WMIUser.SID)).Translate([System.Security.Principal.NTAccount])).Value

    $WMIUser | Add-Member -MemberType NoteProperty -Name Events -Value ($SecurityEvents | Where-Object {($_.Properties).Value -contains $WMIUser.SID}).Count

    $WMIUser | Select-Object -Property Events,Account,SID

    }

System Center Configuration Manager

For those of you with SCCM, it does the hard work for you in its Asset Intelligence feature set. Read more about the SMS_SystemConsoleUser client WMI class, which calculates the dominant user for you.
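As a hedged sketch, on a client with the ConfigMgr agent installed you can read that class directly from WMI; the namespace and property names below follow the SDK documentation for SMS_SystemConsoleUser, so verify them against your client version:

    # Query the Asset Intelligence data the ConfigMgr client has already calculated
    Get-WmiObject -Namespace "root\cimv2\sms" -Class SMS_SystemConsoleUser |
        Sort-Object -Property TotalUserConsoleMinutes -Descending |
        Select-Object -Property SystemConsoleUser, NumberOfConsoleLogons, TotalUserConsoleMinutes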


Using the information

In this example, we want to update the Active Directory computer object ‘ManagedBy’ attribute with the dominant user. For this to happen, we have to edit the default permissions on that attribute in the Organizational Unit where the computer object resides. Step two uses a script to perform the update; there are easier ways to do this, but we want a process that is as intrinsic as possible to the OS.

  • On the OU where the computer objects are, add permissions for SELF for Descendent Computer objects and select “Write ManagedBy”.
  • #Create the script below and feed it the ‘UserName’ determined from the above solutions
    $DomUser = "UserName"

    #Set Filter strings for locating objects in AD
    $strComputerFilter = "(&(objectCategory=Computer)(name=" + $env:COMPUTERNAME + "))" #get current computer name from environment variable
    $strFilter = "(&(objectCategory=User)(samaccountName=$DomUser))" #username set to $DomUser defined above

    $objDomain = New-Object System.DirectoryServices.DirectoryEntry
    $objSearcher = New-Object System.DirectoryServices.DirectorySearcher
    $objSearcher.SearchRoot = $objDomain
    $objSearcher.PageSize = 1000
    $objSearcher.Filter = $strFilter
    $objSearcher.SearchScope = "Subtree"

    #find LDAP path for User
    $ADUser = $objSearcher.FindAll()

    #create PowerShell ADSI object for User
    $ADSIUser=[ADSI]$ADUser.path

    #find LDAP path for Computer
    $objSearcher.Filter = $strComputerFilter
    $computer = $objSearcher.FindAll()

    #create PowerShell ADSI object for Computer
    $ADComputer=[ADSI]$computer.path

    #set attributes on computer AD object
    $ADComputer.managedby = $ADSIUser.distinguishedname
    #$ADComputer.employeeid = $ADSIUser.employeeID
    $ADComputer.setinfo()

  • Then configure the scheduled task to run as SYSTEM with the highest privileges, as sketched below.
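A sketch of registering that task with the built-in ScheduledTasks cmdlets, assuming the script above was saved to a hypothetical path of C:\Scripts\Set-ManagedBy.ps1:

    # Run the script at startup as SYSTEM with the highest run level
    $action  = New-ScheduledTaskAction -Execute "powershell.exe" `
        -Argument "-NoProfile -ExecutionPolicy Bypass -File C:\Scripts\Set-ManagedBy.ps1"
    $trigger = New-ScheduledTaskTrigger -AtStartup
    Register-ScheduledTask -TaskName "Set ManagedBy Attribute" -Action $action `
        -Trigger $trigger -User "SYSTEM" -RunLevel Highest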



In closing, I hope this explains the different types of login information that can be collected, exposes those information locations for you to use, and inspires you to keep track of your assets. A special thanks to Mike Kanofsky who created the script and found the permissions required to update the ‘ManagedBy’ computer object attribute and to Kevin Kasalonis for his SCCM expertise. Thanks for reading!

MSRT July 2016 – Cerber ransomware


As part of our ongoing effort to provide better malware protection, the July 2016 release of the Microsoft Malicious Software Removal Tool (MSRT) includes detection for Win32/Cerber, a prevalent ransomware family. The inclusion in MSRT complements our Cerber-specific family detections in Windows Defender, and our ransomware-dedicated cloud protection features.

We started seeing Cerber in February 2016, and since then it has continuously evolved and is now one of the most encountered ransomware families – beating both Exxroute and Locky. The evolution is mostly based around the way in which Cerber is being distributed – with a focus on exploit kits, compromised websites, and email distribution.

When looking at data for the past 30 days, Cerber is the most detected ransomware, taking over a quarter of all ransomware infections.

Ransomware family    Share
Cerber               25.97%
Exxroute             15.39%
Locky                12.80%
Brolo                11.66%
Crowti                9.97%
FakeBsod              9.19%
Teerac                3.94%
Critroni              3.72%
Reveton               2.86%
Troldesh              1.21%
Ranscrape             1.18%
Sarento               0.76%
Urausy                0.70%
Genasom               0.65%

 

Cerber is especially prevalent in the US, Asia, and Western Europe.

However, infections occur across the globe, and the following heat map demonstrates the geographical spread of infected machines:
[Map: heat map of Cerber infections highlighting the Eastern US, Western Europe, Asia, and South America]

 

Cerber infection chain

Cerber can enter your system or PC either through downloaders from spam email or exploits on malicious or compromised sites.

[Diagram: spam email using macros and scripts to install Cerber onto a PC]

When delivered via spam, we’ve seen the use of both macros and OLE objects to deliver Cerber. We described how malware authors can maliciously use OLE in our blog “Where’s the macro?”, and we’ve previously talked about how macros have been used to deliver malware (although new features in Office 2016 have led to a decrease in macro-based malware).

In this case, we’ve seen malicious files using VBScript (VBS) and JavaScript to download Cerber from a command and control (C2) server. We’ve also seen malicious macros both downloading Cerber and dropping VBS scripts that then download Cerber.

The other infection vector – exploit kits – occurs when a user visits a malicious or compromised website that hosts an exploit kit. The exploit kit checks for vulnerabilities on the PC, and tailors an infection to target those vulnerabilities. This allows the exploit kit to download Cerber onto the PC.

Neutrino, Angler, and Magnitude exploit kits have been identified as distributing Cerber.

 

Cerber updates

As with most other encryption ransomware, Cerber encrypts files and places “recovery” instructions in each folder. Cerber provides the instructions both as .html and .txt formats, and replaces the desktop wallpaper.

Cerber, however, also includes a synthesized audio message.

We described the Cerber infection process in detail in our blog “The three heads of the Cerberus-like Cerber ransomware“.

 

[Screencap: a long note explaining to the user how they were infected]

There have been some updates to this family, however, including a much more detailed description of how ransomware encryption works, and how users can recover their files.

Note that the ransom message now makes claims about Cerber attempting to help make the Internet a safer place, and it doesn’t mention the payment of fees or a ransom to decrypt your files.

Upon investigation, however, we have determined (as of July 8, 2016) that they are asking for a ransom in the form of bitcoins, as shown in the following screenshot of the Tor webpage:

[Screenshot: Tor webpage showing that Cerber requests bitcoin payment to decrypt files]

 

The Cerber desktop wallpaper has also been updated:

[Screenshot: grey wallpaper with a few lines of black text showing links to decrypt files]

 

Prevention

To help stay protected:

  • Keep your Windows operating system and antivirus up to date and, if you haven’t already, upgrade to Windows 10.
  • Regularly back up your files to an external hard drive.
  • Download and apply security patches associated with the exploit kits that are known to distribute this ransomware (for example: Neutrino).
  • Enable File History or system protection. On Windows 10 and Windows 8.1, set up a drive for File History.
  • Use OneDrive for Business.
  • Beware of phishing emails, spam, and malicious attachments.
  • Use Microsoft Edge to get SmartScreen protection. It can help warn you about sites that are known to be hosting exploits, and help protect you from socially engineered attacks such as phishing and malware downloads.
  • Disable the loading of macros in your Office programs.
  • Disable your Remote Desktop feature whenever possible.
  • Use two-factor authentication.
  • Use a safe Internet connection.
  • Avoid browsing websites that are known for being malware breeding grounds (such as illegal music, movie, TV, and software download sites).

Detection

Recovery

In the Office 365 blog “How to deal with ransomware“, there are several options on how you might be able to remediate or recover from a ransomware attack, including backup and recovery using File History in Windows 10 and System Restore in Windows 7.

You can also use OneDrive and SharePoint to back up and restore your files.

 

Carmen Liang and Patrick Estavillo, MMPC

 

Determining the Dominant User and Setting the ManagedBy Computer Attribute

$
0
0

Hi again, this is Stephen Mathews and I’m here to talk about how to determine the dominant or primary user of a Windows operating system. This insight can help administrators facilitate direct communication with the affected user when a system needs management, and can even help non-enterprise users, such as a parent questioning which child is using their computer the most.

We’ll consider the different types of login data available, show how to expose it in the various OS instruments, and then use that information to update the system’s Active Directory computer object ‘ManagedBy’ attribute.

This post uses PowerShell Version 5 on Windows 10 to illustrate examples and it references settings that may not exist in legacy OS versions. All code examples are for illustration purposes only and should be thoroughly tested in a non-production environment. This post is intended to be used within a client OS using its built-in capabilities. Additionally, it is written from an Asset Tracking perspective and is not directly addressing Security and/or User Auditing concerns.

What type of information are you after?

Are you looking for the currently logged in user on the console, remotely logged in users, the last logged in user, the dominant user, or a list of all users?

How will the information be used: for real-time troubleshooting, historical reference, or external app consumption?

Will you script a solution, if so where will you output the data? Will the script be run manually or automatically, if automatically will you use a startup or login script, or a scheduled task?

File System

The filesystem can be the quickest and most efficient way to determine the regular users of a system. By expanding the ‘UserProfile’ environmental variable’s parent directory, you can see users that have had a profile created on that machine. You can check these profile directories and see the Created, Accessed, Modified, and Written timestamps for all of the systems’ users.

Unfortunately, this can also be the most misleading. The user profile directories are mapped via a Security Identifier (SID) that is stored in the registry. If a user account’s logon name is changed, they will still map to the original folder name. Also, if a user’s profile is corrupted they may not get a local directory and be redirected to the default profile. Additionally, the timestamps may not be updating depending on your OS version and/or auditing settings for those folders. And finally, you may not have rights to see the folders or their attributes.

  • Useful for: All Users, Last Logged On (Last access time), Dominant User (Timespan between created and last access time)
  • Get-ChildItem -Path (Get-Item -Path $env:USERPROFILE).PSParentPath | Select-Object -Property Name,*time*
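As a rough sketch of the dominant-user idea above (an illustration, not from the original post), the same listing can be sorted by the timespan between each profile folder's creation and last access:

    #Rank profile folders by the span between creation and last access (a rough proxy for the dominant user)
    Get-ChildItem -Path (Get-Item -Path $env:USERPROFILE).PSParentPath -Directory |
        Select-Object -Property Name, CreationTime, LastAccessTime,
            @{Name="ActiveSpan";Expression={$_.LastAccessTime - $_.CreationTime}} |
        Sort-Object -Property ActiveSpan -Descending

Remember the caveats below: last-access timestamps may not be updated on your OS version, so treat the output as a hint rather than an answer.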

Registry

The registry contains all the configurations and settings for the OS. There are multiple locations in the registry to find specific information about the users. User accounts are usually stored as SIDs inside the registry and will need to be converted into account names.

  • You can resolve a SID directly inside PowerShell which you’ll see later. You can see additional examples of this in Working with SIDs. This code will be worked into a customized Select-Object property hash-table; you can read about that in Using Calculated Properties.
  • (New-Object -TypeName System.Security.Principal.SecurityIdentifier("S-1-5-18")).Translate([System.Security.Principal.NTAccount])

  • $Object | Select-Object -Property SID,@{Name="Account";Expression={((New-Object -TypeName System.Security.Principal.SecurityIdentifier($_.SID)).Translate([System.Security.Principal.NTAccount])).Value}}

The registry is a sensitive part of the OS and can be corrupted. This risk of corruption leads many organizations to strictly limit and audit registry access. Certain registry settings may change only during startup and/or login, meaning the data may be stale while it’s being queried.

  • Useful for: All Users
  • Get-ChildItem -Path "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\" -Recurse

  • Useful for: Currently logged on users:
  • reg query HKU

  • Get-ItemProperty -Path HKLM:\SYSTEM\CurrentControlSet\Control\hivelist
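Putting the registry pieces together, here is a minimal sketch (not from the original post) that enumerates the ProfileList key and resolves each SID to an account name and profile path:

    #Enumerate local profiles from the registry and translate each SID to an account name
    Get-ChildItem -Path "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList" |
        ForEach-Object {
            $SID = $_.PSChildName
            try {
                $Account = ((New-Object -TypeName System.Security.Principal.SecurityIdentifier($SID)).Translate([System.Security.Principal.NTAccount])).Value
            } catch {
                $Account = "<unresolved>" #for example, a deleted account or an unreachable domain
            }
            [PSCustomObject]@{
                SID         = $SID
                Account     = $Account
                ProfilePath = (Get-ItemProperty -Path $_.PSPath).ProfileImagePath
            }
        }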

WMI

Windows Management Instrumentation is an infrastructure that exposes the OS to management. You can find current configuration settings, then get and set those properties.

WMI queries can be difficult to construct and may be resource intensive to the point of resource exhaustion. Take precautions to test the retrieving and setting of WMI components in a test environment before using inside production. Access to WMI may be restricted and audited for the same reasons as the registry.

  • Useful for: All Users, Last Logged On (Last use time), Currently Logged On (Loaded)
  • Get-WmiObject -Class Win32_UserProfile | Select-Object -Property SID,LocalPath,Loaded,LastUseTime,@{Name="Account";Expression={((New-Object -TypeName System.Security.Principal.SecurityIdentifier($_.SID)).Translate([System.Security.Principal.NTAccount])).Value}}

  • Useful for: Currently Logged On (Console)
  • Get-WmiObject -Class Win32_ComputerSystem | Select-Object -Property *ername*
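For the last-logged-on angle, a minimal sketch (not from the original post) that returns the most recently used, non-special profile:

    #Most recently used non-special profile; DMTF timestamps sort chronologically as strings
    Get-WmiObject -Class Win32_UserProfile -Filter "Special = FALSE" |
        Sort-Object -Property LastUseTime -Descending |
        Select-Object -First 1 -Property SID, LocalPath, Loaded, LastUseTime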

ADSystemInfo

This is an overlooked tool to identify current system Active Directory network settings. The ‘UserName’ property will report the currently logged in user. It requires network connectivity to return settings and there’s no inherent way to run it remotely.

  • Useful for: Currently Logged On (Console)
  • $ADSystemProps = @("ComputerName","DomainDNSName","DomainShortName","ForestDNSName","IsNativeMode","PDCRoleOwner","SchemaRoleOwner","SiteName","UserName")

    $ADSystemInfo = New-Object -ComObject "ADSystemInfo"

    foreach ($ADSystemProp in $ADSystemProps) {

    $Value = $ADSystemInfo.GetType().InvokeMember($ADSystemProp, "GetProperty", $Null, $ADSystemInfo, $Null)

    $ADSystemInfo | Add-Member -MemberType NoteProperty -Name $ADSystemProp -Value $Value -Force

    }

    $ADSystemInfo

Event Logs

Event logs are the record keepers of all activities on the system. As such they are the definitive source for tracking the login process. Logging can be turned on or off per provider and the logging level can be tailored based upon the event type: Critical, Error, Warning, Information, and Verbose. The ‘UserID’ property is typically set to the SID of the account creating the event; this is automatically translated for you in the Event Viewer. If the individual Event Log does not populate the ‘UserID’ property, you can parse the event message text for a SID to find events.

  • Get-WinEvent -LogName "Microsoft-Windows-GroupPolicy/Operational" | Select-Object -First 1 -Property *


  • Get-WinEvent -LogName Security | Select-Object -First 1 -Property *

The event log filter can be difficult to configure and a poorly created filter may be resource intensive to the point of resource exhaustion. Access to certain logs may be restricted and not all event logs record the same information in their properties. Furthermore, the logs may be collected into a central repository, making them unavailable or lacking significant detail to make an accurate determination.

The first example uses the Group Policy Operational log and groups the events by ‘UserID’. In the second example, the events do not populate the ‘UserID’ property, so the message data needs to be parsed for matching SIDs; the list of SIDs is built from the Win32_UserProfile class.

  • Useful for: All Users, Dominant User (Count)
  • Get-WinEvent -LogName "Microsoft-Windows-GroupPolicy/Operational" | Group-Object -Property UserID | Sort-Object -Property Count -Descending | Select-Object -Property Count,Name,@{Name="Account";Expression={((New-Object -TypeName System.Security.Principal.SecurityIdentifier($_.Name)).Translate([System.Security.Principal.NTAccount])).Value}}

  • $SecurityEvents = Get-WinEvent -FilterHashTable @{LogName="Security";ID=4624}

    $WMIUserProfiles = Get-WmiObject -Class Win32_UserProfile

    foreach ($WMIUser in $WMIUserProfiles) {

    $WMIUser | Add-Member -MemberType NoteProperty -Name Account -Value ((New-Object -TypeName System.Security.Principal.SecurityIdentifier($WMIUser.SID)).Translate([System.Security.Principal.NTAccount])).Value

    $WMIUser | Add-Member -MemberType NoteProperty -Name Events -Value ($SecurityEvents | Where-Object {($_.Properties).Value -contains $WMIUser.SID}).Count

    $WMIUser | Select-Object -Property Events,Account,SID

    }
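Because an unbounded Security log query can be expensive, as noted above, here is a minimal sketch (run elevated) that limits the query window and counts logon events per account SID; the property index assumes the standard schema of event ID 4624:

    #Count logon events (ID 4624) per SID over the last 30 days
    $Since = (Get-Date).AddDays(-30)
    Get-WinEvent -FilterHashtable @{LogName="Security";ID=4624;StartTime=$Since} |
        Group-Object -Property { $_.Properties[4].Value.ToString() } | #index 4 = TargetUserSid in event 4624
        Sort-Object -Property Count -Descending |
        Select-Object -Property Count,Name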

System Center Configuration Manager

For those of you with SCCM, it does the hard work for you in its Asset Intelligence feature set. Click to read more about the SMS_SystemConsoleUser Client WMI Class that calculates the dominant user for you. Here are a couple of screen shots.


Using the information

In this example, we want to update the Active Directory computer object ‘ManagedBy’ attribute with the dominant user. In order for this to happen we have to edit the default permissions to that attribute in the Organizational Unit where the computer object resides. Step two utilizes a script to perform the update; there are easier ways to do this, however we want to use a process that is as intrinsic as possible to the OS.

  • On the OU where the computer objects are, add permissions for SELF for Descendent Computer objects and select “Write ManagedBy”.
  • #Create the script below and feed it the 'UserName' determined from the above solutions
    $DomUser = "UserName"

    #Set Filter strings for locating objects in AD
    $strComputerFilter = "(&(objectCategory=Computer)(name=" + $env:COMPUTERNAME + "))" #get current computer name from environment variable
    $strFilter = "(&(objectCategory=User)(samaccountName=$DomUser))" #username set to $DomUser defined above

    $objDomain = New-Object System.DirectoryServices.DirectoryEntry
    $objSearcher = New-Object System.DirectoryServices.DirectorySearcher
    $objSearcher.SearchRoot = $objDomain
    $objSearcher.PageSize = 1000
    $objSearcher.Filter = $strFilter
    $objSearcher.SearchScope = "Subtree"

    #find LDAP path for User
    $ADUser = $objSearcher.FindAll()

    #create PowerShell ADSI object for User
    $ADSIUser=[ADSI]$ADUser.path

    #find LDAP path for Computer
    $objSearcher.Filter = $strComputerFilter
    $computer = $objSearcher.FindAll()

    #create PowerShell ADSI object for Computer
    $ADComputer=[ADSI]$computer.path

    #set attributes on computer AD object
    $ADComputer.managedby = $ADSIUser.distinguishedname
    #$ADComputer.employeeid = $ADSIUser.employeeID
    $ADComputer.setinfo()

  • Then configure the scheduled task to run as System with Highest Priority.
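As a minimal sketch of that last step (the task name and script path are illustrative, not from the original post), the ScheduledTasks module can register the script to run as SYSTEM with highest privileges at each logon:

    #Run the ManagedBy update script as SYSTEM, elevated, at every user logon
    $Action    = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-NoProfile -ExecutionPolicy Bypass -File C:\Scripts\Set-ManagedBy.ps1"
    $Trigger   = New-ScheduledTaskTrigger -AtLogOn
    $Principal = New-ScheduledTaskPrincipal -UserId "SYSTEM" -LogonType ServiceAccount -RunLevel Highest
    Register-ScheduledTask -TaskName "Set-ManagedBy" -Action $Action -Trigger $Trigger -Principal $Principal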



In closing, I hope this explains the different types of login information that can be collected, exposes those information locations for you to use, and inspires you to keep track of your assets. A special thanks to Mike Kanofsky who created the script and found the permissions required to update the ‘ManagedBy’ computer object attribute and to Kevin Kasalonis for his SCCM expertise. Thanks for reading!

References:

SMS_SystemConsoleUser Client WMI Class

https://msdn.microsoft.com/en-us/library/cc143513.aspx

Troldesh ransomware influenced by (the) Da Vinci code


We at the MMPC are constantly tracking new and emerging ransomware threats so we can be one step ahead of active campaigns and help protect our users. As part of these efforts, we recently came across a new variant of the Win32/Troldesh ransomware family.

Ransomware, like most malware, is constantly trying to change itself in an attempt to evade detection. In this case, we’ve seen the following updates to Troldesh:

  • Tor functionality
  • Glyph/symbol errors on the wallpaper ransom note
  • Modified extension names for encrypted files
  • New malware being delivered (Trojan:Win32/Mexar.A)
  • Updates the ransom note to cover the Tor functionality

The biggest change in this update is the addition of Tor links. Using Tor addresses as the ransom payment method (as opposed to standard www addresses) is the current fashion among ransomware.

The ransom note now includes links to the Tor address (previously, the only method provided for obtaining decryption was an email address):

The ransom note now includes onion.to addresses for payment

However, upon investigation it appears that Tor has blocked the address:

Screenshot showing that the Troldesh payment site has been blocked by Tor

Errors have been introduced into the image that replaces the user’s desktop wallpaper (this occurred in several samples, but not all):

Errors and unknown symbols have been seen in some versions of the wallpaper - the symbols look like blank boxes and random characters

After encryption, Troldesh changes the file’s extension. In the latest update, we’ve seen it use the following strings:

  • .da_vinci_code
  • .magic_software_syndicate

For example, an encrypted file might appear as follows:

A file name that is a series of random characters and ends in .da_vinci_code

The list of file types that Troldesh encrypts has also increased – see the Win32/Troldesh description for a full list.

Prevention

To help stay protected:

  • Keep your Windows Operating System and antivirus up-to-date and, if you haven’t already, upgrade to Windows 10.
  • Regularly back up your files to an external hard drive
  • Enable file history or system protection. On Windows 10 and Windows 8.1, set up a drive for file history
  • Use OneDrive for Business
  • Beware of phishing emails and spam, and avoid clicking malicious attachments
  • Use Microsoft Edge to get SmartScreen protection. It can help warn you about sites that are known to be hosting exploits, and help protect you from socially-engineered attacks such as phishing and malware downloads.
  • Disable the loading of macros in your Office programs
  • Disable your Remote Desktop feature whenever possible
  • Use two factor authentication
  • Use a safe Internet connection
  • Avoid browsing web sites that are known for being malware breeding grounds (such as illegal music, movies and TV, and software download sites)

Detection

Recovery

In the Office 365 “How to deal with ransomware” blog, there are several options on how you might be able to remediate or recover from a ransomware attack, including backup and recovery using File History in Windows 10 and System Restore in Windows 7.

You can also use OneDrive and SharePoint to backup and restore your files:

  

Patrick Estavillo
MMPC

Reverse engineering DUBNIUM – Stage 2 payload analysis


Recently, we blogged about the basic functionality and features of the DUBNIUM advanced persistent threat (APT) activity group Stage 1 binary and Adobe Flash exploit used during the December 2015 incident (Part 1, Part 2).

In this blog, we will go through the overall infection chain structure and the Stage 2 executable details. Stage 2 executables are the core of this activity group’s operation, as they are the final payload delivered to possible targets that match its profile.

Infection chain overview

The picture below shows the overall infection chain we analyzed.

Flow chart describing how Dubnium is installed

Figure 1: Infection chain overview

 

In most cases, the daily operation of the DUBNIUM APT depends on social engineering through spear-phishing. The group is observed to rely mainly on an .LNK file that has an icon that looks like a Microsoft Word file. If the victim clicks the file thinking it’s a Microsoft Office Word file, it downloads a simple dropper that will download and execute the next-stage binary – which in this case has the file name kernelol21.exe.

The Stage 1 binary extensively checks the system for the existence of security products or the usual analysis tools used by reverse engineers and security analysts. It will pass the client’s IP address, hostname, MAC address, software profile information, and locale information to the download server. When the server decides that the client matches the profile of a possible target, the next-stage dropper will be downloaded.

 

Stage 0: Social Engineering vs. Exploits

In our previous blogs we described the Adobe Flash Exploit the malware recently used. In this blog we want to provide a brief overview of the social engineering method DUBNIUM uses for its daily infection operations. The activity group uses the .LNK file with an icon image of a Word document as one of its social engineering methods.

Shortcut icon disguised as Word document

Figure 2: Shortcut icon disguised as Word document

 

The shortcut contains commands to download and execute the next-level executable or script. Unsuspecting victims will double-click this icon and unknowingly launch a PowerShell command.

The commands in the shortcut

Figure 3: The commands in the shortcut

 

For example, the following shows the script that downloads a binary and executes it on the target system using PowerShell.

PowerShell script for downloading and execution of next stage binary

Figure 4: PowerShell script for downloading and execution of next stage binary

 

To make the attack look benign, the dropper drops an Office Word document and displays it on the screen. One of the samples we saw had content similar to the following screenshot:

Fake document contents - North Korean style language and mentions on North Korean leaders with New year’s celebration

Figure 5: Fake document contents – North Korean style language and mentions on North Korean leaders with New year’s celebration

 

Stage 2 infection process

Acquiring a Stage 2 binary is very difficult for analysts because the download server is very selective about its infection targets. The main direction of the infection strategy is not to infect as many machines as possible; instead it focuses on infecting targets that match the desired profile while avoiding detection by security products. One very interesting fact is that the command and control (C2) server we have been observing didn’t go down for months. Overall security product coverage of Stage 2 executables is very poor, so this strategy of very selective Stage 2 infection appears to have been effective for the activity group.

The following diagram shows the transition from Stage 1 to Stage 2 through the downloaded binary.

Stage 1 to 2 transition

Figure 6: Stage 1 to 2 transition

 

The dropped binary (Dropper PE module) is never written to disk and is injected directly into a newly created process. In this case plasrv.exe is used, but the process name can vary each time. The dropper PE module will look for the usual system processes, for example dwm.exe in this case, and will inject kbkernelolUpd.dll.

This is the main C2 client that will communicate with the C2 server and process downloaded commands. It performs the extra work of creating a process from a common Windows binary under the system folder and injecting the kernelol21.exe binary into it. This is a process persistence module, which will re-inject kbkernelolUpd.dll if the process is killed for some reason. The kbkernelolUpd.dll module also constantly monitors the existence of the injected kernelol21.exe process and will re-launch and re-inject the module if the infected host process is killed. This makes a process persistence loop.

The following screenshot shows the typical process tree when the Stage 2 infection happens. The dwm.exe and cipher.exe processes are infected with kbkernelolUpd.dll and kernelol21.exe.

Typical process list with Stage 2 infection

Figure 7 Typical process list with Stage 2 infection

 

The persistence of the whole infection is maintained by the Windows logon registry key shown in the following picture.

kernelol21.exe load key

Figure 8 kernelol21.exe load key

 

The following table shows the infection targets used for each stage. All infection target process files are default Windows executables under the system32 folder.

Component: Stage 1 dropper PE module (creates a new process)
Injection targets:
  • plasrv.exe
  • wksprt.exe
  • raserver.exe
  • mshta.exe
  • taskhost.exe
  • dwm.exe
  • sdiagnhost.exe
  • winrshost.exe
  • wsmprovhost.exe

Component: Stage 2 kbkernelolUpd.dll (injects into an existing process; if the process is killed, svchost.exe will be created by the Stage 2 kernelol21.exe)
Injection targets:
  • dwm.exe
  • wuauclt.exe
  • ctfmon.exe
  • wscntfy.exe

Component: Stage 2 kernelol21.exe (creates a new process)
Injection targets:
  • cipher.exe
  • gpupdate.exe
  • services.exe
  • sppsvc.exe
  • winrshost.exe

Table 1: DUBNIUM infection targets

 

Process image replacement technique

When the main C2 client module, kbkernelolUpd.dll, is injected, it uses a LoadLibrary call initiated through the CreateRemoteThread API. This is a very typical technique used by many malware families.

Injected LoadLibrary code

Figure 9: Injected LoadLibrary code

 

But for the dropper PE module in Stage 1 and the kernelol21.exe injection in Stage 2, it uses a process image replacement technique. It creates a usual Windows process, injects the PE module into this process, fabricates PEB information, and modifies the startup code to achieve process injection.

 

Writing PE Image

The technique starts with creating a process from an executable under the Windows system folder. Table 1 shows the target processes the injection will be made into, depending on the stage and the binary. The process is created suspended and modifications are performed on its image. The first step is injecting the infection PE image into the process using the WriteProcessMemory API.

Figure 10 shows the code that injects the PE header, and Figure 11 shows the memory of the target process where the PE header is injected.

Injecting PE header

Figure 10: Injecting PE header

 

PE header written on target process

Figure 11 PE header written on target process

 

After the injection of PE header, it will go through each section of the source PE image and inject them one by one to the target process memory space.

PE section injection

Figure 12: PE section injection

 

The injected PE module has dependencies on the hardcoded base and section addresses. If VirtualAlloc function upon the desired base or section addresses fails, the whole injection process will fail.

 

Acquiring context and PEB information

The next step of infection is using GetThreadContext API to retrieve current context of the target process.

GetThreadContext

Figure 13: GetThreadContext

 

One of the thread contexts retrieved is shown in the following image.

Retrieved Context

Figure 14: Retrieved Context

 

When the process is started as suspended, the ebx register is initialized with the pointer to PEB structure. The following shows the original PEB data from the target process. The ImageBaseAddress member is at offset of +8 and the value is 0x00e0000 in this case. This is the image base of the main module of the target process.

Original PEB structure

Figure 15: Original PEB structure

 

After retrieving the PEB.ImageBaseAddress from the target process (Figure 16), it will replace it with the base address of the injected module (Figure 17).

Reading PEB.ImageBaseAddress

Figure 16: Reading PEB.ImageBaseAddress

Overwriting PEB.ImageBaseAddress

Figure 17: Overwriting PEB.ImageBaseAddress

 

The PEB.ImageBaseAddress of the target process is replaced, as in the following figure, to point to the base address of the injected PE module.

Overwritten PEB.ImageBaseAddress

Figure 18: Overwritten PEB.ImageBaseAddress

 

Overwriting wmainCRTStartup

 

After overwriting PEB.ImageBaseAddress to an injected module’s base address, the next step is patching wmainCRTStartup code from the original main module.

wmainCRTStartup patch code

Figure 19: wmainCRTStartup patch code

 

The following code shows original disassembly from wmainCRTStartup code.

Original code

Figure 20: Original code

 

After patch, it will just jump to the entry code of the injected module located at address of 0x4053d0, which is the entry point of the injected module. When ResumeThread is called upon this thread, it will start the main module from the injected module’s entry code.

Patched code

Figure 21: Patched code

 

Main C2 Client (kbkernelolUpd.dll)

As kbkernelolUpd.dll is the main module of the infection chain, we are going to focus on the analysis of this binary. As we stated before, the detection coverage and information on this specific component is limited in the security industry.

 

The string for the C2 server hostname and URI is encoded in a configuration block inside the binary.

C2 server string decoding

Figure 22: C2 server string decoding

 

In the following disassembly listing, get_command uses wininet.dll APIs to send basic client information and to retrieve commands from the server. The process_command routine parses the message and executes the designated commands.

C2 command fetch & execution loop

Figure 23: C2 command fetch & execution loop

 

Between each contact to the C2 server, there is a timeout. The timeout value is saved inside the encoded configuration block in the binary. For example, the sample we worked on had a 30-minute time out between each contact request to the server.

Sleep interval between C2 accesses

Figure 24: Sleep interval between C2 accesses

 

Cryptographic C2 channel and message format

The following diagram shows the basic message format of the C2 server payload that is downloaded when the client contacts the server.

Decrypting C2 message

Figure 25: Decrypting C2 message

 

The message from the C2 server can be encoded in various ways. The first byte in the payload is the XOR key that is used to decode the following bytes. The encryption type byte indicates what encryption algorithm is used. Three different encryption schemes (0x50, 0x58, 0x70) are supported.

From our static analysis, 0x58 is for the AES-256 encryption algorithm, while 0x70 and 0x50 are for the 3DES-168 algorithm. If the type is 0x40, no encryption is used. The 0x50 and 0x58 encryption types do not appear to be fully implemented yet, so the 0x70 type with the 3DES-168 algorithm is the only encryption type that is fully working here.
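As a minimal sketch of the first decoding step described above (the file name is illustrative), the outer XOR layer can be stripped before any of the 3DES/AES handling:

    #Strip the outer XOR layer: byte 0 is the key for the remaining bytes
    $Payload = [System.IO.File]::ReadAllBytes("C:\samples\c2_response.bin")
    $Key     = $Payload[0]
    $Decoded = $Payload[1..($Payload.Length - 1)] | ForEach-Object { $_ -bxor $Key }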

The decryption scheme is using an embedded RSA private key with the decryption key embedded in the binary. By calling CryptHashData upon the embedded password string and using CryptDeriveKey, it will acquire a key to decrypt the encrypted RSA private key. (Figure 26)

Setting encryption key

Figure 26: Setting encryption key

 

This decryption key is used to import 0x258 bytes of private key embedded inside the binary. And this private key is used to decrypt the encrypted key (Key data 02 from Figure 25) passed through the response packet from the C2 server. Next, the IV (Initialization Vector) passed through the response packet is set as a parameter to the key object.

Importing keys and IV

Figure 27: Importing keys and IV

 

Finally, the actual decryption of the payload happens through CryptDecrypt API call. The question still remains why the C2 server and the client are using such an overcomplicated encryption scheme.

Decrypting message

Figure 28: Decrypting message

 

Command processor

The C2 command processor looks very typical. It has a simple packet parser for TLV (type, length, value) data structure. The following picture shows the main routine that processes packet length and types. It will call relevant functions for each packet type.

Main command processor function

Figure 29: Main command processor function

 

Each command provides functionality typically seen in backdoors: registry and file system manipulation, searching for files with specific patterns and retrieving and transferring them back to the server, and gathering network status information.

Infections statistics

The following chart shows the relative prevalence of the threat overall. We included Stage 1 and Stage 2 payload detections in this map.

Bar chart showing countries with most infections in China and Japan

Figure 30: Infection distribution by countries

 

Most of the infections we saw were focused on East Asia—mostly China and Japan. We already described that the Stage 1 dropper collects and sends the IP address and language locale of the machines it infects to the Stage 2 dropper distribution site. We think this distribution site has logic to determine whether or not to drop the next payload.

The Stage 1 dropper is also known to collect information on culture-specific software, like messengers and security software mainly used in mainland China. If the distribution site doesn’t push back a Stage 2 payload, the Stage 1 payload doesn’t have any means of persistence at all. This means that, for all the cost of infiltrating the machine, the malware simply gives it up if the machine doesn’t fit its profile. Based upon the actual infection map and the behavior of the Stage 1 dropper, this is a good indication that the activity group has a strong geolocation preference for its targets.

 

Conclusion

DUBNIUM is a very cautious actor. From the vendor detections for Stage 2 binaries, we can see that there are no serious detections of them in the industry. This is partially due to the strategy that DUBNIUM employs: it doesn’t try to infect as many machines as possible, since doing so could expose important components, like the C2 client modules, to unintended targets. The very long lifespan of the domain it controls and uses for C2 operation supports this story.

Another feature of DUBNIUM is that it uses encoding and encryption schemes over its executables and network protocols. Each stage has different styles of encoding and decoding schemes; some are complicated and some are relatively simple. Stage 1 binaries have a stronger obfuscation and payload encoding scheme than Stage 2 binaries. The C2 server payload has its own format with encrypted message support.

The other feature of DUBNIUM is that at each stage it always checks the running environment. It focuses on security products and analyst tools in Stage 1, and is very cautious about debugging tools in Stage 2 binaries. In Stage 1, it also collects extensive information on the client system, including locale, IP and MAC address, which is sent to the Stage 2 distribution site. The distribution site serves each client only once based upon this information. Getting served the next-stage binary is sometimes very challenging, as we don’t know the backend algorithm that determines whether or not to serve it.

 

Appendix – Indicators of Compromise

 

Stage 0

Adobe Flash Player Exploit

3eda34ed9b5781682bcf7d4ce644a5ee59818e15 SWF File

 

LNK

25897d6f5c15738203f96ae367d5bf0cefa16f53

624ac24611ef4f6436fcc4db37a4ceadd421d911

 

Droppers

09b022ef88b825041b67da9c9a2588e962817f6d

35847c56e3068a98cff85088005ba1a611b6261f

7f9ecfc95462b5e01e233b64dcedbcf944e97fca

aee8d6f39e4286506cee0c849ede01d6f42110cc

b42ca359fe942456de14283fd2e199113c8789e6

cad21e4ae48f2f1ba91faa9f875816f83737bcaf

ebccb1e12c88d838db15957366cee93c079b5a8e

4627cff4cd90dc47df5c4d53480101bdc1d46720

 

Fake documents displayed from droppers

24eedf7db025173ef8edc62d50ef940914d5eb8a

7dd3e0733125a124b61f492e950b28d0e34723d2


afca20afba5b3cb2798be02324edacb126d15442

 

Stage 1

Droppers

0ac65c60ad6f23b2b2f208e5ab8be0372371e4b3

1949a9753df57eec586aeb6b4763f92c0ca6a895

4627cff4cd90dc47df5c4d53480101bdc1d46720

561db51eba971ab4afe0a811361e7a678b8f8129

6e74da35695e7838456f3f719d6eb283d4198735

8ff7f64356f7577623bf424f601c7fa0f720e5fb

b8064052f7fed9120dda67ad71dbaf2ac7778f08

dc3ab3f6af87405d889b6af2557c835d7b7ed588

 

Stage 2

Dropper

2d14f5057a251272a7586afafe2e1e761ed8e6c0

3d3b60549191c4205c35d3a9656377b82378a047

 

kernelol21.exe

6ce89ae2f1272e62868440cde00280f055a3a638

 

kbkernelolUpd.dll

b8ea4b531e120730c26f4720f12ea7e062781012

0ea2ba966953e94034a9d4609da29fcf11adf2d5

926ca36a62d0b520c54b6c3ea7b97eb1c2d203a9

db56f474673233f9b62bef5dbce1be1c74f78625

 

UserData

147cb0d32f406687b0a9d6b1829fb45414ce0cba

 

Acknowledgement: Special thanks to Mathieu Letourneau at MMPC for providing statistical coverage data on the DUBNIUM multi-stage samples and providing insight on the interpretation of the data. Special thanks to HeungSoo David Kang for providing screenshots from the fake Office Word document file.

 

Jeong Wook Oh
MMPC

 

Troubleshooting Device Enrollment with the Hybrid Diagnostic tool


Author: Raghu Kethineni, Senior Program Manager, Enterprise Client and Mobility

We are excited to announce the release of the System Center Configuration Manager Hybrid Diagnostic tool. If a technical issue is preventing one of your users from enrolling a device, run the Hybrid Diagnostic tool as your first step in troubleshooting. The Hybrid Diagnostic tool’s automated checks reduce investigation time and the provided guidance on the common configuration errors will help you to quickly resolve issues and get your user’s mobile device successfully enrolled.

Troubleshooting with the Hybrid Diagnostic tool takes just 3 simple steps.

  1. Run the tool on the computer that hosts the service connection point and specify the device type and UPN.
  2. The Hybrid Diagnostic Tool will run the following automated checks:
    • Checks that the SMS Executive service is running
    • Checks for the service connection point certificate
    • Checks for potential conflicts between service connection point certificates
    • Checks for DNS CName entry for the specified UPN
    • Checks for device type enablement in Configuration Manager
    • Checks for known errors in Status Messages
    • Checks for UPN synchronization in AAD
    • Checks that the specified user is a member of the cloud user collection
    • Checks that the AAD ID and cloud user ID match
    • Checks for user exceeding device cap
    • Checks for multiple valid certificates present on the service connection point
  3. If a check fails, choose the More Info link to see more information about resolving the issue.
    Hybrid Diagnostics mobile device troubleshooting summary

It’s that easy.

Try: System Center Configuration Manager Hybrid Diagnostic tool

Learn More:

We are always interested in hearing your feedback. Please provide feedback and suggestions using the Configuration Manager UserVoice site.

-Raghu Kethineni


Additional resources:

#AzureAD: Certificate based authentication for iOS and Android now in preview!


Howdy folks,

It seems like hardly a week goes by these days without a new story of leaked credentials, malware and phishing hitting the news. These stories make it super clear that in many ways, passwords are one of the most vulnerable parts of many security regimes. In Microsoft’s Identity Division, we are doing a ton of work to give our customers options to beef up their password security (by making passwords harder to crack and adding MFA). But our bigger goal is to eliminate the need for passwords altogether. If you are a Windows 10 user and have used Windows Hello, you’ve already experienced one of our big investments in this effort to eliminate passwords through the use of biometrics.

Today’s news is another big part of that effort.

Today we’re announcing the public preview of certificate based authentication for iOS and Android for Office 365. This is our first public preview of this offer, but the solution is already relatively mature, and some of our largest enterprise customers (many of whom are “smartcard only”) are already using it to enhance the security of users accessing company resources from mobile devices.

This preview lights up 2 key scenarios:

  1. In federated Azure AD domains, Office applications on iOS and Android can perform certificate-based authentication against the federation server. The chart below outlines the support for certificate based authentication across Office applications:

    Application                                          iOS            Android
    Office clients (Word, PowerPoint, Excel, OneNote)    Supported      Supported
    OneDrive                                             Supported      Supported
    Outlook                                              Coming soon    Supported
    Skype for Business                                   Coming soon    Supported

  2. Supported Exchange ActiveSync mobile apps in iOS and Android can now do certificate-based authentication to Exchange Online, for both managed and federated Azure AD domains.

How to get started?

Requirements

First things first, let’s quickly go over the key requirements.

General requirements:

  1. You must have one or more certificate authorities that issue user certificates for authentication.
  2. Each certificate authority must have a certificate revocation list (CRL) that can be referenced via an internet facing URL.
  3. User certificates must be provisioned on the mobile devices. Many people do this via Mobile Device Management (MDM) software.

For Office application support,

  1. Your Azure AD domain must be federated, and the federation provider (e.g. Active Directory Federation Services) must be configured to perform certificate based user authentication.
  2. iOS version >= 9.0 and Android version >= Lollipop are required.
  3. On iOS, the Azure Authenticator app must be installed from the App Store.

For Exchange ActiveSync support,

  1. The RFC822 attribute in user’s certificates must match the user’s routable email address in Exchange Online. If the RFC822 attribute is not present, the UPN attribute of the certificate must match the UPN of the user in Azure AD. This is required to map the certificate to a user in Azure AD.

Configuration

This section assumes that you already have the federation server configured for certificate based authentication.

To setup these new capabilities in your environment, follow the steps below:

  1. Configure your certificate authorities in Azure AD: To leverage certificate authentication, Azure AD needs to know about your certificate authorities so it can validate user certificates and perform revocation. To do so, first install the Azure AD preview powershell module. Once connected to your tenant, run the following commands to add a new certificate authority:

    $cert=Get-Content -Encoding byte "[LOCATION OF THE CER FILE FOR THE CERTIFICATE AUTHORITY]"

    $new_ca=New-Object -TypeName Microsoft.Open.AzureAD.Model.CertificateAuthorityInformation

    $new_ca.AuthorityType=0

    $new_ca.TrustedCertificate=$cert

    $new_ca.crlDistributionPoint = "[URL FOR THE CERTIFICATE REVOCATION LIST]"

    New-AzureADTrustedCertificateAuthority -CertificateAuthorityInformation $new_ca

    To verify that the certificate authority is added correctly, run the Get-AzureADTrustedCertificateAuthorities command.

  2. Configure your federation server to send the serial number and issuer claims: For Azure AD to validate certificates and perform revocation in the federated environment, information about the user certificate used for authentication must be present in the token returned from the federation server. The following claims need to be present in the token for Azure AD to perform revocation:
    1. http://schemas.microsoft.com/ws/2008/06/identity/claims/

      (The serial number of the client certificate)

    2. http://schemas.microsoft.com/2012/12/certificatecontext/field/

      (The string for the issuer of the client certificate)

Validation

To test certificate based authentication with an Office application, follow the steps below:

  1. On your test device, install the OneDrive app from the App Store or Google Play Store.
  2. Verify that the user certificate has been provisioned to the test device. iOS and Android have facilities for viewing installed certs in their respective settings apps.
  3. Verify the Azure Authenticator app is installed on the test device if it is an iOS device. This step is not required on Android.
  4. Launch OneDrive.
  5. Enter your user name, and then pick the user certificate you want to use to sign in.

You should be successfully signed in!

Want to test certificate based authentication with Exchange ActiveSync clients? Follow the steps here.

And as always, we’d love to get any feedback or suggestions you have.

Best Regards,

Alex Simons (Twitter: @Alex_A_Simons)

Director of Program Management

Microsoft Identity Division


Endpoint Protection Updates Configuration Manager


Hi everyone, my name is Nicholas Jones, Premier Field Engineer with Microsoft, specializing in System Center Configuration Manager. For my first blog, I want to introduce you to updating System Center Endpoint Protection (SCEP) definition updates. Huge thanks to my colleague Jeramy Skidmore, Sr. Escalation Engineer, for helping me with this blog.

If your company has deployed or is planning to deploy SCEP, you will certainly have to plan to deploy definition updates.

In my observations, the most common solution that administrators use is to create an ADR (see below) and let it run on a schedule:

This will certainly get the updates deployed, but there is more to consider.

Make Updates Available Outside of Configuration Manager

What happens if the CM Software Update Agent fails to install definitions? What happens if the end user forces an update by pressing the update button in the SCEP user interface? In these situations, we’ll need to better understand the setting for definition update sources in the Antimalware Policy. If you’re not familiar with this, navigate to Assets and Compliance, Endpoint Protection, Antimalware Policies. You could have quite a few Antimalware policies, but I’ll be working with the default policy in my screenshots today.

At this point, those who are familiar with these settings may be ready to skip ahead. Please hang with me.

What do these settings actually do?

You’ve got a few options here, so let’s discuss what they actually do.

When the SCEP client definitions become too far out of date, or if the end user clicks Update in the UI, the SCEP client looks for a FallBackOrder registry key in HKLM\Software\Policies\Microsoft\Microsoft Antimalware\Signature Updates . The SCEP client will check each update source in order until it locates a source that has available definitions. If none of the sources have definitions available, the SCEP client will return an error.

Updates distributed from Configuration Manager

Selecting this option sets a registry value called AuGracePeriod in HKLM\Software\Policies\Microsoft\Microsoft Antimalware\Signature Updates . By default, this is set to 4,320 minutes, or 72 hours. You can modify this value in your Antimalware Policy. This value represents (in minutes) the amount of time the SCEP client will ‘sleep’ and wait for CM to bestow signatures upon it. When this period expires, it will attempt to pull definitions from the order defined by policy and stored in the Fallback registry key. Believe it or not, SCEP cannot use CM as an update source location for definitions, which is why this setting does not modify the FallBackOrder registry key.
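A quick way to see what the Antimalware Policy has actually written to a client (the registry path and value names are the ones mentioned above) is to read them directly:

    #Inspect the fallback order and the Configuration Manager grace period applied by policy
    Get-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\Microsoft Antimalware\Signature Updates" |
        Select-Object -Property FallBackOrder, AuGracePeriod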

Updates from UNC file shares

If we select this option, we must also define the UNC paths in the definition updates section of the antimalware policy. This can be seen a few screenshots above. This option modifies both the FallbackOrder key and the DefinitionUpdateFileShareSources key. Multiple UNC paths can be specified, as seen below. This can leverage existing DFS infrastructure if it exists. A few drawbacks of this option are that the UNC file share is not populated automatically and it does not take advantage of binary delta differentials. Also, if out of date definitions are left on the UNC share, it can cause the clients to fail checking any further sources in the fallback list.

Updates distributed from Microsoft Update

This one sounds fairly obvious. It is useful for clients that are off of your network for a while, unless you are set up to manage internet based clients or are using DirectAccess. Of the two Microsoft hosted fallback locations, this is ideal as it results in the smallest payload delivered to the client.

Updates distributed from Microsoft Malware Protection Center

MMPC should always be last in your source list, as the payload from this location will be much larger.


Updates distributed from WSUS

Configuration Manager admins generally stay out of the WSUS console, except to periodically perform a WSUS cleanup or other maintenance. While it’s true that WSUS is mostly controlled by Configuration Manager, it will still function happily as a standalone WSUS instance for the purposes of making SCEP definition updates available. If you have WSUS listed as an update source, you should plan to create an Automatic Approval rule for SCEP definitions. It will look something like this:

I do hope this post helps you better understand the flow of SCEP definition updates. Please post any comments or questions and I’ll respond when I can.

Announcing: New Transport Advancements in the Anniversary Update for Windows 10 and Windows Server 2016


TCP based communication is used ubiquitously in devices from IoT to cloud servers. Performance improvements in TCP benefit almost every networking workload. The Data Transports and Security (DTS) team in Windows and Devices Group is committed to making Windows TCP best in class. This document will describe the first wave of features in the pipeline of upcoming Windows Redstone releases.

Windows is introducing new TCP features in the Anniversary Update for Windows 10 and Windows Server 2016 releasing summer 2016. In this document we will describe five key features designed to reduce latency, improve loss resiliency and to promote better network citizenship. The goals when starting out were to decrease TCP connection setup time, increase TCP startup speed and to decrease time to recover from packet loss.

Here is a summary of the feature list:

  1. TCP Fast Open (TFO) for zero RTT TCP connection setup. IETF RFC 7413 [1]
  2. Initial Congestion Window 10 (ICW10) by default for faster TCP slow start [5]
  3. TCP Recent ACKnowledgment (RACK) for better loss recovery (experimental IETF draft) [4]
  4. Tail Loss Probe (TLP) for better Retransmit TimeOut response (experimental IETF draft) [3]
  5. TCP LEDBAT for background connections IETF RFC 6817 [2]

 

TCP Fast Open: TCP Fast Open (TFO) accomplishes zero RTT connection setup time by generating a TFO cookie during the first three-way handshake (3WH) connection setup. Subsequent connections to the same server can use the TFO cookie to connect in zero-RTT. TFO connection setup really just means that TCP can carry data in the SYN and SYN-ACK. This data can be consumed by the receiving host during the initial connection handshake. TFO is one full Round Trip Time (RTT) faster than the standard TCP setup which requires a three way-handshake. This leads to latency savings and is very relevant to short web transfers over the Internet where the average latency is on the order of 40 msec.

Transport Layer Security (TLS) over TCP using Fast Open is typically two Round Trip Times faster than a standard TLS over TCP connection setup because a client_hello can be included in the SYN packet saving an additional RTT in the TLS handshake. This savings can add up to a substantial increase in resource efficiency while using busy servers that deliver many small Internet objects to the same clients (standard web page, mobile APP data, etc.) TLS 1.3 is an ongoing effort at the IETF and it will help us achieve zero-RTT connection setup for HTTP workloads in a subsequent release.

Because we are changing the 3WH behavior of TCP there are several issues that we must address and mitigate. Windows recommends that TLS be used over TCP when employing TCP Fast Open to remove the chance that a man in the middle could manipulate TFO cookies for use in amplified DDOS attacks. TLS connections are immune to attacks from behind Shared Public IPs (NATs), however, it is still possible for a compromised host to flood spoofed SYN packets with valid cookies. To address the problem of compromised hosts Windows TFO sets a dynamically adjusted maximum limit on the number of pending TFO connection requests preventing resource exhaustion attacks from compromised hosts running malicious code. Finally, it is possible for the SYN packet to be duplicated in the network. TLS precludes such duplicate delivery but other services need to ensure that TFO is used for idempotent requests. Windows TFO is safe when used as recommended (with TLS) and can provide a substantial increase in resource efficiency.

The Anniversary Update for Windows 10 will ship with a fully compliant client side implementation enabled by default. The Microsoft Edge browser will ship with an About:Flags setting for TCP Fast Open which will be disabled by default. The eventual goal is to have it enabled by default in IE and Edge browsers in a subsequent release. In a subsequent release we plan to support early accept and to fully integrate the server side implementation with http.sys/IIS. The server side implementation will be disabled by default.

Configuration: In the Edge browser, navigate to “about:flags” or “about:config” and use the checkbox for “Enable TCP Fast Open”. At the OS level: netsh int tcp set global fastopen=<enabled | disabled>

Action Items: If you operate infrastructure or own software components like middleboxes or packet processing engines that make use of a TCP state machine, please begin looking into supporting RFC 7413. By next year the combination of TLS 1.3 and TFO is expected to be more widespread.  Read more at: Building a faster and more secure web with TCP Fast Open, TLS False Start, and TLS 1.3

 

Initial Congestion Window (IW10): The Initial Congestion Window (IW or ICW) default value in Windows 10 and Server 2012 R2 is 4 MSS. With the new releases the default value will be 10 MSS. The IW10 default improves slow start speed over the previous default value of IW4. This change in Windows TCP’s startup behavior is designed to keep pace with the increased emission rates of network routing equipment used on the Internet today. The ICW determines the limit on how much data can be sent in the first RTT. Like Windows TFO, IW10 mostly affects small object transfers over the Internet. Windows IW10 can transfer small Internet objects up to twice as quickly as ICW4.

There are some concerns around burst losses with switches and routers that have shallow buffers. We have telemetered such episodes to help us improve the reliability in subsequent releases. In RS2, we plan to flight IW 4, IW 10 and IW 16 to have a better performance comparison across device types.

Configuration: This is currently configured through templates (netsh) or set-nettcpsetting (Powershell). On client SKU the only options to change the IW are to switch to the compat template (IW = 4) or to use the SIO_TCP_SET_ICW option, which also restricts the values in range (2, 4, 10). On server SKU IW can be configured up to a maximum of 64.
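As a hedged sketch (cmdlet and parameter names assume the in-box NetTCPIP module on Windows Server 2016; verify on your build), the template values can be inspected and the custom templates adjusted with PowerShell:

    #View the initial congestion window per template, then raise it on the DatacenterCustom template (server SKU)
    Get-NetTCPSetting | Select-Object -Property SettingName, InitialCongestionWindowMss, CongestionProvider
    Set-NetTCPSetting -SettingName "DatacenterCustom" -InitialCongestionWindowMss 16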

Action Items: Please notify us if you see increased loss rates or timeouts with RS1 clients and servers.

 

Tail Loss Probe (TLP): Tail Loss Probe is intended to improve Windows TCP’s behavior when recovering from packet loss. TLP improves TCP recovery behavior by converting Retransmit TimeOuts (RTOs) into Fast Retransmits for much faster recovery.

TLP transmits one packet in two round-trips when a connection has outstanding data and is not receiving any ACKs. The transmitted packet (the loss probe), can be either new or a retransmission. When there is tail loss, the ACK from a loss probe triggers SACK/FACK based fast recovery, thus avoiding a costly retransmission timeout (which is bad from the point of view of the long duration as well as the reduction of the congestion window and repeat of slow start).

TLP is enabled only for connections that have an RTT of at least 10 msec in both RS1 and Server 2016. This is to avoid spurious retransmissions for low latency connections. The most beneficial scenario for TLP is short web transfers over WAN.

Configuration: The TCP templates have the additional setting called “taillossprobe”. On client SKU switching to compat template turns TLP off. On both client and server SKUs, the Internet template has it enabled by default. The InternetCustom and DatacenterCustom templates can be used for more fine grained control for specific connections.

 

Recent ACKnowledgement (RACK): RACK uses the notion of time instead of counting duplicate ACKnowledgements to detect missing packets for TCP Fast Recovery. RACK provides improved loss detection over standard TCP Fast Recovery techniques.

RACK is based on the notion of time, instead of traditional approaches for packet loss detection such as packet or sequence number checks. Packets are deemed lost if a packet that was sent “sufficiently later” has been cumulatively or selectively acknowledged. The TCP sender records packet transmission times and infers losses using cumulative or selective acknowledgements.

RACK is enabled only for connections that have an RTT of at least 10 msec in both RS1 and Server 2016. This is to avoid spurious retransmissions for low latency connections. RACK is also only enabled for connections that successfully negotiate SACK.

Configuration: The TCP templates have the additional setting called “rack”. On client SKU switching to compat template turns RACK off. On both client and server SKUs, the Internet template has it enabled by default. The InternetCustom and DatacenterCustom templates can be used for more fine grained control for specific connections.

 

Windows Low Extra Delay BAckground Transport (LEDBAT): The fifth feature is in response to a large number of customer requests for a background transport that does not interfere with other TCP connections. To address these requests we used the Windows TCP modular congestion control structure and added a new Congestion Control Module called LEDBAT to manage background flows.

Windows LEDBAT is implemented as an experimental Windows TCP Congestion Control Module (CCM). Windows LEDBAT transfers data in the background and does not interfere with other TCP connections. LEDBAT does this by only consuming unused bandwidth. When LEDBAT detects increased latency that indicates other TCP connections are consuming bandwidth it reduces its own consumption to prevent interference. When the latency decreases again LEDBAT ramps up and consumes the unused bandwidth.

Configuration: LEDBAT is only exposed through an undocumented socket option at the moment. Please contact us if you would like to enable experimentation for a background workload.

Introducing #AzureAD Connect Health for Windows Server AD


Howdy folks,

We’ve just turned on the preview of Azure AD Connect Health for Windows Server AD. This new feature of Azure AD Premium gives IT admins the ability to monitor the health and performance of their on-premises Windows Server Domain Controllers from the cloud. This new capability has been a HUGE hit with our private preview customers and we’re hoping you’ll be excited as well.

I’ve asked Arturo Lucatero, one of the Program Managers on the Azure AD Connect Health R&D team, to write a quick blog post on this cool new feature. You’ll find his blog below.

Hopefully you will find this new capability useful! And as always, we would love to receive any feedback or suggestions you have.

Best Regards,

Alex Simons (Twitter: @Alex_A_Simons)

Director of Program Management

Microsoft Identity Division

——————————–

Hello World,

I’m Arturo Lucatero, a Program Manager on the Azure AD Connect Health team. Today, I’m pleased to announce the next addition to Azure AD Connect Health, which is monitoring for Active Directory Domain Services (AD DS.) While Azure AD Connect Health has the ability to monitor ADFS and Azure AD Connect (Sync), we knew that Active Directory Domain Services is a critical component and we wanted to make sure we gave you the same, easy, low-cost and insightful monitoring experience. Starting with the quick and simple onboarding process, Azure AD Connect Health for AD DS is here to improve your monitoring experience!

Active Directory Domain Services was first introduced back in 1999 and is now the cornerstone for the identity needs of most business organizations. Enabling a monitoring solution for Active Directory Domain Services is critical to a company’s reliable access to applications. Introducing the ability to monitor your AD DS infrastructure from the cloud opens many possibilities that weren’t previously available with traditional boxed monitoring solutions. Let’s take a look!

The preview release of Azure AD Connect Health for AD DS has the following capabilities:

  • Monitoring alerts to detect when domain controllers are unhealthy, along with email notifications for critical alerts.
  • Domain Controllers dashboard which provides a quick view into the health and operational status of your domain controllers.
  • Replication Status dashboard with latest replication information, along with links to troubleshooting guides when errors are detected.
  • Quick anywhere access to performance data graphs of popular performance counters, necessary for troubleshooting and monitoring purposes.
  • RBAC controls to delegate and restrict access to the users managing AD DS.

Installation is extremely simple. All you have to do is install the agent (links available in our documentation as well as in the Connect Health portal) on your domain controllers. This process takes less than 5 minutes! We also provide a scriptable deployment option to automate this in larger environments.

Alerts

The Azure AD Connect Health for AD DS alerts are intended to inform you when something is wrong in your environment. Whether a domain controller is unable to replicate successfully, can’t find a PDC, isn’t properly advertising, or has one of many other issues, you can count on these alerts to inform you. Additionally, if you enable email notifications, you will receive these alerts straight to your inbox.

We are constantly striving to enhance our alerts, and your feedback is very important to us. You can share your thoughts about a particular alert, by clicking on the feedback command within the alert blade.

Domain Controllers Dashboard

This dashboard provides a unified lens into the health and operational status of your AD DS environment. We interviewed a number of domain admins and one of the challenges for them was the ability to have a quick glance view of their environment to detect hotspots. By presenting a topological view along with health status and key operational metrics of monitored DCs, this dashboard makes it quick and easy to identify any DCs that might require further investigation.

Knowing whether your DCs are advertising, whether they are able to reach a Global Catalog, or when they were last rebooted are a few of the metrics that you can add to your dashboard by selecting them from the columns blade. By default, DCs are grouped by their corresponding domain; however, a single click will group them by their corresponding site. This is super helpful when trying to understand the topological composition of your environment. Lastly, if you have a large environment, you can use the find box to quickly filter out DCs.

Replication Status Dashboard

Replication is one of the most critical processes for keeping your environment running smoothly. This dashboard provides a view of the replication topology along with the status of the latest replication attempt for your monitored DCs. If one or more of your DCs encountered an error during the latest replication attempt, you will find helpful details and documentation links to assist with remediation.

To make errors easier for admins to spot, we automatically expand any domain controllers with replication errors so that you can quickly focus on the ones that might require your attention.
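If you want to cross-check what the dashboard shows against the domain controllers themselves, the familiar repadmin tool provides a quick on-premises view of the same replication health. For example (DC01 is a placeholder name):

# Summary of replication health across all DCs, including largest deltas and failure counts
repadmin /replsummary

# Inbound replication partners and the result of the last replication attempt for a single DC
repadmin /showrepl DC01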

Monitoring

The monitoring feature lets you compare the performance of your monitored DCs against each other, as well as compare different metrics of interest. These data points can be critical when troubleshooting AD DS. Whether you want to know how your DCs are handling Kerberos authentications per second or what the replication queue size looks like, you can easily find out. This gives you access to the performance data of your environment entirely from the cloud, from anywhere in the world.

As part of this first round, we have included 13 of the most popular performance metrics, such as LDAP bind time, LDAP searches per second and NTLM authentications per second, amongst others. You can use the "Filter" command to add them to your blade, giving you a single location where you can compare different metrics in the same view. Clicking on a chart lets you drill into a specific performance metric, with additional controls for the time range and a tabular view of the data that shows peaks and averages.
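These NTDS counters are the same ones exposed locally through Performance Monitor, so you can sanity-check a chart directly on a DC with Get-Counter. The counter paths below are assumptions based on common NTDS counter names and may differ slightly by OS version; list the exact names first if one of them does not resolve.

# Confirm the exact counter names exposed by the NTDS object on this DC
(Get-Counter -ListSet NTDS).Counter

# Sample a few of the counters mentioned above every 5 seconds for about a minute
# (paths are assumptions - adjust them to match the list printed above)
$paths = '\NTDS\LDAP Searches/sec',
         '\NTDS\LDAP Bind Time',
         '\NTDS\NTLM Authentications'

Get-Counter -Counter $paths -SampleInterval 5 -MaxSamples 12 |
    ForEach-Object { $_.CounterSamples | Select-Object Path, CookedValue }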

We are constantly adding new items to the list. If there is a particular performance metric you would find helpful to be included, please let us know!

Video

The video below provides an overview of how to get started using Azure AD Connect Health for AD DS, as well as a walkthrough of the features we’ve discussed.

https://channel9.msdn.com/Series/Azure-Active-Directory-Videos-Demos/Azure-AD-Connect-Health-monitors-on-premises-AD-Domain-Services

What’s coming next?

  • Additional alerts based on customer feedback and data from our support channel
  • Additional performance metrics that help with monitoring your AD DS environment

For additional information on how to get started monitoring your AD DS, see Azure AD Connect Health Documentation.

Your feedback is very important to us, and I’d encourage you to post any comments, questions or concerns in our discussion forum or send us a note at askaaadconnecthealth@microsoft.com. Additionally, feel free to comment at the bottom of this post.

Thanks for your time,

-Arturo (@ArlucaID) & The Azure AD Connect Health Team

KB: Data to gather when opening a case for Microsoft Azure Automation


A new Knowledge Base article has been published that describes some of the basic information you should gather before opening a case for Azure Automation with Microsoft product support. This information is not required; however, it will help Microsoft resolve your problem as quickly as possible. You can find the complete article below.

KB3178510 - Data to gather when opening a case for Microsoft Azure Automation (https://support.microsoft.com/en-us/kb/3178510)
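The KB article has the authoritative list, but much of the basic information a support engineer typically asks for (subscription and Automation account details, the IDs and status of failing jobs, and the modules imported into the account) can be pulled together with the AzureRM.Automation cmdlets. The example below is only a hedged sketch; the resource group and account names are placeholders, and the exact data you need will depend on your issue.

# Sign in and select the subscription that contains the Automation account
Login-AzureRmAccount

$rg      = 'MyResourceGroup'        # placeholder
$account = 'MyAutomationAccount'    # placeholder

# Most recent jobs and their status - note the JobId of anything that failed
Get-AzureRmAutomationJob -ResourceGroupName $rg -AutomationAccountName $account |
    Sort-Object StartTime -Descending |
    Select-Object -First 20 RunbookName, Status, StartTime, JobId

# Modules imported into the account, often relevant when runbooks fail
Get-AzureRmAutomationModule -ResourceGroupName $rg -AutomationAccountName $account |
    Select-Object Name, Version, ProvisioningState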


J.C. Hornbeck, Solution Asset PM
Microsoft Enterprise Cloud Group

Windows SBS 2011, Windows SBS 2008 and impact of MS16-072


[This post comes to us courtesy of Susan Bradley, Wayne Small and Schumann GE from Product Group]

On June 14, 2016, Microsoft released MS16-072 (KB3159398) to fix a vulnerability in Group Policy that could allow elevation of privilege if an attacker launches a man-in-the-middle (MiTM) attack against the traffic passing between a domain controller and the target machine on domain-joined Windows computers. After MS16-072 is installed, user group policies are retrieved by using the computer’s security context. This by-design behavior change protects domain-joined computers from the security vulnerability. Any Group Policy that performs security filtering on a per-user basis will need to be adjusted in order to work after MS16-072.

For SBS 2008 and SBS 2011 in particular, there are several group policies set up in the product to control the users’ desktop environment and Windows Server Update Services (WSUS) that are directly impacted by this change and will need adjustment in order to continue to work after the patch has been applied.

There will be no automated patch to fix this issue on the SBS 2011 platform, so we recommend that you take the following action to ensure that the default group policies on your SBS 2008 or SBS 2011 server are adjusted, and that you also check whether any group policies you have added to these systems are impacted.

I would like to thank the various blogs and resources that provided the additional information I am relying on to put together this guidance for the SBS community.

If you’d like to review these additional resources, I’d recommend Jeremy Moskowitz’s blog and Darren Mar-Elia’s blog. Additional resources include the AskDS blog and the JH consulting blog. I would recommend reviewing these if you manage other server platforms, as the commands and PowerShell scripts differ slightly between versions of Windows Server.

Prior to MS16-072, Group Policy could be set up with security filtering targeted at individual users. On both SBS 2008 and SBS 2011, the SBSMonitoring service runs a routine every 20 minutes that synchronizes the SBS-created ("stamped") users with the Security Filtering on the "Windows SBS User Policy", so that SBS can deploy specific settings to the users’ desktop environment. If you merely add the Domain Computers READ right to the Security Filtering section in Group Policy (or make any other manual change to security filtering), 20 minutes later you will find that change removed. So we must add the Domain Computers READ right in a specific way.
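You can watch this behavior for yourself by dumping the delegation on the SBS user policy before and roughly twenty minutes after a manual security-filtering change. A minimal check, using the same plural 2008 R2 cmdlet names as the scripts later in this post, looks like this:

# List everyone who currently holds a permission on the SBS user policy
Import-Module GroupPolicy
Get-GPPermissions -Name 'Windows SBS User Policy' -All |
    Select-Object @{Name='Trustee';Expression={$_.Trustee.Name}}, Permission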

I’d first recommend that you review your server(s) and workstations to confirm that the patch has been deployed. Secondly, you will need to review your group policies to assess whether they are impacted. An excellent PowerShell script you can use to check your systems comes from the PoSHChap blog.

To begin, log into your SBS 2011 server. Find Windows PowerShell under Accessories/Windows PowerShell, right-click it and choose Run as Administrator.

[Screenshot 1]

Now copy and paste the following script to review which group policies are impacted:

Copy below this line

===============================================================================

#Load GPO module
Import-Module GroupPolicy

#Get all GPOs in current domain
$GPOs = Get-GPO -All

#Check we have GPOs
if ($GPOs) {

    #Loop through GPOs
    foreach ($GPO in $GPOs) {

        #Nullify $AuthUser & $DomComp
        $AuthUser = $null
        $DomComp = $null

        #See if we have an 'Authenticated Users' perm
        $AuthUser = Get-GPPermissions -Guid $GPO.Id -TargetName "Authenticated Users" -TargetType Group -ErrorAction SilentlyContinue

        #See if we have a 'Domain Computers' perm
        $DomComp = Get-GPPermissions -Guid $GPO.Id -TargetName "Domain Computers" -TargetType Group -ErrorAction SilentlyContinue

        #Alert if we don't have an 'Authenticated Users' permission
        if (-not $AuthUser) {

            #Now check for 'Domain Computers' permission
            if (-not $DomComp) {

                Write-Host "WARNING: $($GPO.DisplayName) ($($GPO.Id)) does not have an 'Authenticated Users' permission or 'Domain Computers' permission - please investigate" -ForegroundColor Red

            }   #end of if (-not $DomComp)
            else {

                #COMMENT OUT THE BELOW LINE TO REDUCE OUTPUT!
                Write-Host "INFORMATION: $($GPO.DisplayName) ($($GPO.Id)) does not have an 'Authenticated Users' permission but does have a 'Domain Computers' permission" -ForegroundColor Yellow

            }   #end of else (-not $DomComp)

        }   #end of if (-not $AuthUser)
        elseif (($AuthUser.Permission -ne "GpoApply") -and ($AuthUser.Permission -ne "GpoRead")) {

            #COMMENT OUT THE BELOW LINE TO REDUCE OUTPUT!
            Write-Host "INFORMATION: $($GPO.DisplayName) ($($GPO.Id)) has an 'Authenticated Users' permission that isn't 'GpoApply' or 'GpoRead'" -ForegroundColor Yellow

        }   #end of elseif (($AuthUser.Permission -ne "GpoApply") -and ($AuthUser.Permission -ne "GpoRead"))
        else {

            #COMMENT OUT THE BELOW LINE TO REDUCE OUTPUT!
            Write-Output "INFORMATION: $($GPO.DisplayName) ($($GPO.Id)) has an 'Authenticated Users' permission"

        }   #end of else (-not $AuthUser)

    }   #end of foreach ($GPO in $GPOs)

}   #end of if ($GPOs)

===============================================================================

Copy above this line

Script courtesy of https://blogs.technet.microsoft.com/poshchap/2016/06/16/ms16-072-known-issue-use-powershell-to-check-gpos/

Either paste the script into the PowerShell window on the server, or save it as a .ps1 file and run it. You should see several red warnings indicating that some of your group policies do not have the right permissions.

[Screenshot 2]
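Incidentally, if you save the check script as a .ps1 file rather than pasting it into the window, bear in mind that the default execution policy may block unsigned local scripts. One way around that for a one-off check is to bypass the policy for that single invocation; the file name below is simply whatever you saved the script as.

# Run the saved check script with the execution policy bypassed for this one process only
powershell.exe -ExecutionPolicy Bypass -File .\Check-GPOPermissions.ps1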

In reading various scripts online, it turns out that the PowerShell cmdlets for GPO permissions are named differently in 2008/2008 R2 than in later versions of Windows. Be aware that the solution provided in this blog post works specifically on 2008 and 2008 R2 and does not work unchanged on 2012 and 2012 R2. The difference is simple: for 2008 and 2008 R2, replace the Get-GPPermission and Set-GPPermission commands (used by scripts written for later versions) with Get-GPPermissions and Set-GPPermissions, and the script will work fine.
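If you maintain scripts that have to run on both the older and newer platforms, one lightweight approach is to detect which cmdlet names the local GroupPolicy module exposes and alias one set to the other. A small sketch of that idea:

# Let a script written with the 2008/2008 R2 (plural) names also run on 2012 and later
Import-Module GroupPolicy

if (-not (Get-Command Get-GPPermissions -ErrorAction SilentlyContinue)) {
    # Only the singular names exist here, so alias the plural names onto them
    Set-Alias -Name Get-GPPermissions -Value Get-GPPermission
    Set-Alias -Name Set-GPPermissions -Value Set-GPPermission
}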

Secondly, given that we still have a large number of SBS sites, I did some specific testing. The results of the script mean that the following policies are affected by this issue and MAY NOT APPLY if you don’t add Authenticated Users OR Domain Computers with READ on the Delegation tab for each GPO:

  • Windows SBS User Policy
  • SharePoint PSConfig Notification Policy
  • Update Services Server Computers Policy
  • Update Services Client Computers Policy

Microsoft has indicated specific conditions for using either Authenticated Users or Domain Computers with the READ permission. I’ve done quite a bit of investigation and, in conversation with Group Policy MVPs, have decided to implement this consistently using the Domain Computers group, as this works for all scenarios.

Now we need to adjust the permissions so that the group policies continue to work after the installation of MS16-072 (KB3159398).

For SBS 2011, cut and paste the following script into the PowerShell window:

Copy below this line

===============================================================================

Import-Module GroupPolicy

Get-GPO -All | Set-GPPermissions -TargetType Group -TargetName "Domain Computers" -PermissionLevel GpoRead

===============================================================================

Copy above this line

The first line loads the Group Policy module for PowerShell; the second line adds the Domain Computers READ right on the Delegation tab of every GPO, so that the security filtering set up by the server can continue to be processed.

The script should scroll through the settings and adjust the group policies.

[Screenshot 3]

The script has now done what it needs to do. If you’d like to see the impact visually, open any Group Policy object and you will see Domain Computers on the Delegation tab with READ rights.

[Screenshot 4]

On the Windows SBS User Policy Group Policy object, you should now see:

[Screenshot 5]

Domain Computers with a Read right to the Group policy object.
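The one-liner above stamps the READ right onto every GPO in the domain, which is the simplest approach. If you would rather touch only the four SBS policies the check flagged, a narrower variant is sketched below; it uses the same plural 2008 R2 cmdlet names and assumes Set-GPPermissions will accept the GPO display name via -Name, as it does on 2008 R2.

# Scoped alternative: add the Domain Computers READ right only to the four affected SBS policies
Import-Module GroupPolicy

$sbsPolicies = 'Windows SBS User Policy',
               'SharePoint PSConfig Notification Policy',
               'Update Services Server Computers Policy',
               'Update Services Client Computers Policy'

foreach ($policy in $sbsPolicies) {
    Set-GPPermissions -Name $policy -TargetName 'Domain Computers' -TargetType Group -PermissionLevel GpoRead
}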

Now run the testing script again to confirm that your group policy permissions have been adjusted.

Once again, copy and paste the following script into the PowerShell window or save it as a .ps1 script:

Copy below this line

===============================================================================

#Load GPO module
Import-Module GroupPolicy

#Get all GPOs in current domain
$GPOs = Get-GPO -All

#Check we have GPOs
if ($GPOs) {

    #Loop through GPOs
    foreach ($GPO in $GPOs) {

        #Nullify $AuthUser & $DomComp
        $AuthUser = $null
        $DomComp = $null

        #See if we have an 'Authenticated Users' perm
        $AuthUser = Get-GPPermissions -Guid $GPO.Id -TargetName "Authenticated Users" -TargetType Group -ErrorAction SilentlyContinue

        #See if we have a 'Domain Computers' perm
        $DomComp = Get-GPPermissions -Guid $GPO.Id -TargetName "Domain Computers" -TargetType Group -ErrorAction SilentlyContinue

        #Alert if we don't have an 'Authenticated Users' permission
        if (-not $AuthUser) {

            #Now check for 'Domain Computers' permission
            if (-not $DomComp) {

                Write-Host "WARNING: $($GPO.DisplayName) ($($GPO.Id)) does not have an 'Authenticated Users' permission or 'Domain Computers' permission - please investigate" -ForegroundColor Red

            }   #end of if (-not $DomComp)
            else {

                #COMMENT OUT THE BELOW LINE TO REDUCE OUTPUT!
                Write-Host "INFORMATION: $($GPO.DisplayName) ($($GPO.Id)) does not have an 'Authenticated Users' permission but does have a 'Domain Computers' permission" -ForegroundColor Yellow

            }   #end of else (-not $DomComp)

        }   #end of if (-not $AuthUser)
        elseif (($AuthUser.Permission -ne "GpoApply") -and ($AuthUser.Permission -ne "GpoRead")) {

            #COMMENT OUT THE BELOW LINE TO REDUCE OUTPUT!
            Write-Host "INFORMATION: $($GPO.DisplayName) ($($GPO.Id)) has an 'Authenticated Users' permission that isn't 'GpoApply' or 'GpoRead'" -ForegroundColor Yellow

        }   #end of elseif (($AuthUser.Permission -ne "GpoApply") -and ($AuthUser.Permission -ne "GpoRead"))
        else {

            #COMMENT OUT THE BELOW LINE TO REDUCE OUTPUT!
            Write-Output "INFORMATION: $($GPO.DisplayName) ($($GPO.Id)) has an 'Authenticated Users' permission"

        }   #end of else (-not $AuthUser)

    }   #end of foreach ($GPO in $GPOs)

}   #end of if ($GPOs)

===============================================================================

Copy above this line

Script courtesy of https://blogs.technet.microsoft.com/poshchap/2016/06/16/ms16-072-known-issue-use-powershell-to-check-gpos/

Your resulting output should not show any red warnings; instead it should be filled with white and yellow informational messages:

[Screenshot 6]

Your SBS 2011 default group policies will now function as usual.

If you’d like all future group policies you create to work by default with the new behavior, you can follow the advice in the section entitled “Making the change permanent in Active Directory for future / newly born GPOs” in Jeremy Moskowitz’s blog.

For SBS 2008, you’ll need to manually add the READ permission on the Delegation tab, as shown:

[Screenshot 4]

On the Windows SBS User Policy Group Policy object, you should once again see Domain Computers with a Read right:

[Screenshot 5]
