What’s new for container identity

Identity is a crucial component of any application. Whether you’re authenticating users on a web app or querying data from a back-end server, chances are you’ll need to integrate with an identity provider. Containerized applications are no exception, which is why we’ve included support for Active Directory identities in Windows Containers from the beginning. Now that Windows Server 2019 is available, we’d like to show you what we’ve been working on over the last three years to make Windows container identity easier and more reliable.

If you’d like to jump straight into the documentation on container identity, head on over to https://aka.ms/containers/identity

Improved Reliability

When we launched support for containers in Windows Server 2016, we set off on an adventure to redefine how people manage their apps. One of those innovations was the use of a group managed service account (gMSA) to replace the computer identity in containers. Before containers were a thing, you would typically domain-join your computer and use its implicit identity or a service account to run the app. With containers, we wanted to avoid the complexity of domain join since it would quickly become difficult to manage short-lived computer objects in Active Directory. But we knew apps would still need to use AD identities, so we came up with a solution to assign a gMSA to the container computer account at runtime. This gave the container a similar experience to being domain joined, but let multiple containers use the same identity and avoided having to store sensitive credentials in the container image.

As more customers started using gMSA with a wide variety of applications, we identified two issues that affected the reliability of gMSA with containers:

  1. If the hostname of the container did not match the gMSA name, certain functionality like inbound NTLM authentication and ASP.NET Membership role lookups would fail. This was an easy doc fix, but led to a new problem…
  2. When multiple containers used the same hostname to talk to the same domain controller, the last container would supersede the others and terminate their connections, resulting in random authentication failures.

To address these issues, we changed how the container identifies itself on the network to ensure it uses its gMSA name for authentication regardless of its hostname and made sure multiple connections with the same identity are properly supported. All you need to do to take advantage of this new behavior is upgrade your container host and images to Windows Server 2019 or Windows 10 version 1809.

Additionally, if you were unable to use gMSA identities with Hyper-V isolated containers in Windows versions 1703, 1709, and 1803, you’ll be glad to know that we’ve fixed the underlying issue in Windows Server 2019 and Windows 10 version 1809. If you can’t upgrade to the latest version of Windows, you can also use gMSAs with Hyper-V isolation on Windows Server 2016 and Windows 10 version 1607.

Better docs and tooling

We’ve invested in improving our documentation to make it easier for you to get started using gMSAs with your Windows containers. From creating your first gMSA account to updating your Dockerfile to help your app use the gMSA, along with troubleshooting tips for when things go wrong, you’ll find it all at https://aka.ms/containers/identity.

As part of the documentation upgrade, we’ve also made it easier to get the Credential Spec PowerShell module. The source code still lives on GitHub, but you can now easily download it from the PowerShell Gallery by running Install-Module CredentialSpec. There are also a few improvements under the hood, including better support for child domains and improved validation of the account information.
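For a feel of the end-to-end workflow, here’s a minimal sketch; the gMSA name, domain, and container image are placeholder assumptions, so see the docs above for the full walkthrough:

# Assumes a gMSA named WebApp01 already exists in the contoso.com domain
Install-Module CredentialSpec
New-CredentialSpec -AccountName WebApp01
# The credential spec JSON lands in Docker's CredentialSpecs folder, e.g. contoso_webapp01.json
docker run --security-opt "credentialspec=file://contoso_webapp01.json" -d mcr.microsoft.com/windows/servercore:ltsc2019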

Kubernetes Support

Finally, we’re excited to announce that alpha support for gMSA with Windows containers is shipping with Kubernetes version 1.14! Kubernetes takes care of copying credential specs automatically to worker nodes and adds role-based access controls to limit which gMSAs can be scheduled by users. While gMSA support is not yet ready for production use, you can try it by enabling alpha features as described in the Kubernetes gMSA docs.


AskDS Is Moving!

Hello readers.

The AskDS blog, as with all the other TechNet and MSDN blogs, will be moving to a new home on TechCommunity.

The migration is currently in progress.

This post will be updated with the new URL when the migration is complete.

Thanks!

Let me Count the Ways: Determining Why the System Process Consumes 100% of a Single CPU Core

___________________________________________________________________________________________________________________________

IMPORTANT ANNOUNCEMENT FOR OUR READERS!

AskPFEPlat is in the process of a transformation to the new Core Infrastructure and Security TechCommunity, and will be moving by the end of March 2019 to our new home at https://aka.ms/CISTechComm (hosted at https://techcommunity.microsoft.com). Please bear with us while we are still under construction!

We will continue bringing you the same great content, from the same great contributors, on our new platform. Until then, you can access our new content on either https://aka.ms/askpfeplat as you do today, or at our new site https://aka.ms/CISTechComm. Please feel free to update your bookmarks accordingly!

Why are we doing this? Simple really; we are looking to expand our team internally in order to provide you even more great content, as well as take on a more proactive role in the future with our readers (more to come on that later)! Since our team encompasses many more roles than Premier Field Engineers these days, we felt it was also time we reflected that initial expansion.

If you have never visited the TechCommunity site, it can be found at https://techcommunity.microsoft.com. On the TechCommunity site, you will find numerous technical communities across many topics, which include discussion areas, along with blog content.

NOTE: In addition to the AskPFEPlat-to-Core Infrastructure and Security transformation, Premier Field Engineers from all technology areas will be working together to expand the TechCommunity site even further, joining together in the technology agnostic Premier Field Engineering TechCommunity (along with Core Infrastructure and Security), which can be found at https://aka.ms/PFETechComm!

As always, thank you for continuing to read the Core Infrastructure and Security (AskPFEPlat) blog, and we look forward to providing you more great content well into the future!

__________________________________________________________________________________________________________________________

 

NOTE: This blog post walks through a specific issue to show the steps involved in troubleshooting this type of problem. The process name(s) referenced in this content, except for System, could be any process, not just the one used as an example in this post, and in no way is it expected, nor implied, that this particular process will cause you any problems!

 

Hey everyone, Konstantin Chernyi here. I’m a Premier Field Engineer at Microsoft Russia and today I’m gonna tell you a real-world story that happened recently. Long story short, a customer asked me: “How do I understand why the System process is consuming 100% of a single CPU core on my machine?”

Whenever I see a description or request like this, my first step is to collect an ETW trace. In the past, I would send long instructions on how to install the Windows Performance Toolkit (https://docs.microsoft.com/en-us/windows-hardware/test/wpt/) and how to use xperf with the appropriate kernel flags to collect data, but these days, thanks to the product group, I don’t need to do that anymore. Since the very first release of Windows 10/Windows Server 2016, WPR (Windows Performance Recorder) ships with the OS, along with a lot of predefined profiles. So all we need to do is collect a short trace at the exact moment the problem exists. In this case we used the CPU profile:

wpr -start CPU

<wait 10-15 seconds, so we have enough information>

wpr -stop C:\temp\trace.etl

As soon as the customer provided the trace, I opened it in WPA (Windows Performance Analyzer).

The CPU is busy indeed:

Top CPU consumer – System:

The System process has multiple threads, but only one TID (#76) is very active and consuming CPU time:

With public symbols (https://docs.microsoft.com/en-us/windows/desktop/dxtecharts/debugging-with-symbols) we can go deeper and review the functions called in this thread:

ntoskrnl.exe!KeBalanceSetManager... huh, time to remember what I read in Windows Internals back in the day. On page 188 of Windows Internals, 6th Edition, Part 2, you can find an explanation of this function:

The balance set manager (KeBalanceSetManager, priority 16). It calls an inner routine, the working set manager (MmWorkingSetManager), once per second as well as when free memory falls below a certain threshold. The working set manager drives the overall memory management policies, such as working set trimming, aging, and modified page writing.

Hmm, it looks like they are facing memory-related problems, but they didn’t mention any, and the initial request was about high CPU consumption by the System process. Let’s look at the memory info we have in the trace, which isn’t much since we used the CPU profile, but at least let’s give it a try.

According to the info in the trace, this box has 4.9GB in the Zero and Free lists and 2.7GB in the Standby lists, which gives us 7.6GB of available memory… confusing, isn’t it? It looks like this box has plenty of available memory, but the system calls the KeBalanceSetManager routine every second:

Also, if you take a closer look at memory utilization, you can see that the Paged Pool commit is around 11.9GB, which is a lot:
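(By way of comparison, you can sanity-check available memory and paged pool usage on a live machine with performance counters; a quick sketch:)

# Show available memory and paged pool usage via performance counters
Get-Counter '\Memory\Available MBytes', '\Memory\Pool Paged Bytes' |
    Select-Object -ExpandProperty CounterSamples |
    Format-Table Path, CookedValue -AutoSize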

Here are some good articles to read on this area:

https://blogs.technet.microsoft.com/markrussinovich/2009/03/10/pushing-the-limits-of-windows-paged-and-nonpaged-pool/

https://docs.microsoft.com/en-us/windows/desktop/memory/memory-pools

Total installed RAM on this box is 32GB:

It looks like we need a better look at what’s happening in memory… and the best way to do that is a memory dump. In this case we decided to try generating a mirror memory dump via livekd.exe (https://docs.microsoft.com/en-us/sysinternals/downloads/livekd).

So with that in mind, I asked the customer to grab a dump by executing livekd -ml -o C:\temp\m.dmp

Next I’ll use WinDbgX, aka WinDbg Preview (https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/debugging-using-windbg-preview), and the Mex extension (https://www.microsoft.com/en-us/download/details.aspx?id=53304).

Let’s start by reviewing virtual memory states:

From here we can see that the PagedPool commit = 11.95GB, but the PagedPool usage is zero; I assume that’s because we used a mirror dump. Anyway, from this data we can clearly see that we have a memory issue, since Available pages = 2.87MB and there are a lot of pool allocation failures. First, let’s look at memory usage by process:

Heh, SynTPEnh.exe consumed 9.48GB of RAM, very well. Now let’s see what’s in the Paged Pool:

Hmm, a bunch of Token objects; let’s shed some light and dump them all:

Again SynTPEnh.exe. Now let’s count all the token handles:

Keep in mind that every handle gives us about 8 rows, so dividing the row count gives us about 160k handles in total, which is a lot, and almost all of them belong to SynTPEnh.exe.

On the customer machine, the application event log was full of events like this one:

So the next step is to check if there are any “zombie processes” https://randomascii.wordpress.com/2018/02/11/zombie-processes-are-eating-your-memory/, and we can see a lot of them:

The majority of the zombie processes are in session 1, which is not in use in this case since the customer uses RDP to connect to this machine:

The customer said that there weren’t any pointing devices on this tower except a mouse, so it was safe to uninstall the software and check. After uninstallation, memory consumption immediately went down, and with nothing left for the System process to do, it went almost idle.

So what was our root cause?

For some reason the SynTPEnh.exe process was being created every 4 seconds; it would do some work for about 1 second and then crash. The token handle held by the parent service that started the process was never released, which leads to the memory leak and the high CPU consumption. Here is an example from the trace showing the SynTPEnh.exe process coming and going all the time:
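If you want to watch this kind of process churn live, here’s a hedged sketch using WMI process-start events (the process name is just this case’s example, and it requires elevation):

# Log each start of a suspect process as it happens
$query = "SELECT * FROM Win32_ProcessStartTrace WHERE ProcessName = 'SynTPEnh.exe'"
Register-CimIndicationEvent -Query $query -SourceIdentifier SuspectStart -Action {
    $e = $Event.SourceEventArgs.NewEvent
    Write-Host ('{0:T}  PID {1} started (parent PID {2})' -f (Get-Date), $e.ProcessID, $e.ParentProcessID)
}
# ...and when you're done watching:
# Unregister-Event -SourceIdentifier SuspectStart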

Wbr, Konstantin.

 

General Availability of PowerShell Core 6.2

We’re proud to announce that the latest version of PowerShell has been released!

This is the third minor supported release of PowerShell Core, the open-source edition of PowerShell that works on Linux, macOS, and Windows!

Thanks to everyone that made this release possible, including our contributors, users, and anyone who filed issues and submitted feedback.

So How Do I Install It?

For info on installing PowerShell Core 6.2, check our installation docs.

A reminder that PowerShell Core works side-by-side with Windows PowerShell, so you can use both independently of each other.
This means that you can continue to use Windows PowerShell for existing scripts while simultaneously using PowerShell Core for new automation or to explore its new capabilities.
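A quick way to check which edition a given session is running:

PS> $PSVersionTable.PSEdition
Core

(Windows PowerShell reports Desktop here, so you can always tell the two apart.)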

What’s New?

The PowerShell Core 6.2 release is focused primarily on performance improvements, bug fixes, and smaller cmdlet/language enhancements that improve the quality of life for users.
To see a full list of improvements, check out our detailed changelogs on GitHub.

Since the 6.1.0 release (September 2018), we’ve taken over 560 changes for the 6.2 release! That’s almost 4 changes a day (excluding weekends)! Of course, we have to thank our community for providing a significant portion of these improvements. Per our public PowerBI dashboard,
the community is still contributing just over half of all incoming pull requests!

Throughout the development of 6.2, the PowerShell Core team has also been focused on supporting PowerShell Core 6 in Azure Functions (more on this soon!), automating our release process (blog coming!), the v1.18.0 release of PSScriptAnalyzer, the 2.0.0-Preview release of the PowerShell Visual Studio Code extension, and, of course, the PowerShell Core 6.2 release.

Experimental Features

In the 6.1 release, we enabled support for Experimental Features which allow contributors and PowerShell Team members to deliver new features and get feedback before we consider the design complete and to avoid making breaking changes as the design evolves. It’s often easier to provide feedback by experimenting with working code than from reading a specification that describes the user experience.

In the 6.2 release, we have a number of Experimental Features you can try out. We’d love it if you could provide us with feedback on these so we can make improvements, decide whether they’re worth keeping, or promote them out of the experimental state.

At any time you can use Get-ExperimentalFeature to get a list of available experimental features that can be enabled or disabled with Enable/Disable-ExperimentalFeature.
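For example (feature names from the sections below; output trimmed):

PS> Get-ExperimentalFeature | Format-Table Name, Enabled

Name                        Enabled
----                        -------
PSCommandNotFoundSuggestion   False
PSImplicitRemotingBatching    False
PSTempDrive                   False
PSUseAbbreviationExpansion    False

Note that enabling or disabling a feature takes effect in the next pwsh session.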

Command Not Found Suggestions

Enable-ExperimentalFeature -Name PSCommandNotFoundSuggestion

This feature will use fuzzy matching to find suggestions of commands or cmdlets you may have meant to type if you made a typo.

PS> Get-Commnd
Get-Commnd : The term 'Get-Commnd' is not recognized as the name of a cmdlet, function, script file, or operable program.
Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
+ Get-Commnd
+ ~~~~~~~~~~
+ CategoryInfo          : ObjectNotFound: (Get-Commnd:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException


Suggestion [4,General]: The most similar commands are: Get-Command, Get-Content, Get-Job, Get-Module, Get-Event, Get-Host, Get-Member, Get-Item, Set-Content.

In this example, I mistyped Get-Command and it fuzzy matched to a number of suggestions from most likely to least likely.

Implicit Remoting Batching

Enable-ExperimentalFeature -Name PSImplicitRemotingBatching

When using implicit remoting in a pipeline, PowerShell treats each command in the pipeline independently. This results in objects being serialized and de-serialized between the client
and target system repeatedly over the execution of the pipeline.

With this change, PowerShell analyzes the pipeline and, if every command in it is either safe to run locally or exists on the target system, executes the entire pipeline remotely and only serializes and de-serializes the final results back to the client.

This can result in significant performance gains! A real-world test of Get-Process | Sort-Object over localhost shows a decrease from 10-15 seconds to 20-30 milliseconds, a speed increase of 300-750x. This should be even faster over a real network connection, and only requires you to update your client (no changes to the server side are necessary).
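A rough sketch of trying this yourself, using localhost as the "remote" system (assumes PowerShell remoting is enabled and the feature above is turned on):

# Create an implicit remoting session and proxy Get-Process through it
$session = New-PSSession -ComputerName localhost
Import-PSSession -Session $session -CommandName Get-Process | Out-Null
# With PSImplicitRemotingBatching enabled, the whole pipeline runs remotely
Measure-Command { Get-Process | Sort-Object -Property CPU }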

Temp Drive

Enable-ExperimentalFeature -Name PSTempDrive

If you’re using PowerShell Core on different operating systems, you’ll discover that the environment variable for finding the temporary directory is different on Windows, macOS, and Linux! With this feature, you will get a PSDrive called Temp: that is automatically mapped to the temporary folder on whichever operating system you are using.

PS> "Hello World!" > Temp:/hello.txt
PS> Get-Content Temp:/hello.txt
Hello World!

Be aware that native file commands (like ls on Linux) are not aware of PSDrives and won’t see this Temp: drive.

Abbreviation Expansion

Enable-ExperimentalFeature -Name PSUseAbbreviationExpansion

PowerShell cmdlets are expected to have descriptive nouns. This can result in long names that take time to type and make typos more likely. This feature allows you to type just the uppercase characters of the cmdlet name and use tab-completion to find a match.

PS> i-arsavsf

If you hit tab, and have the Azure PowerShell Az module installed, it will autocomplete to:

PS> Import-AzRecoveryServicesAsrVaultSettingsFile

Note that this feature is intended to be used interactively so the abbreviated forms of cmdlets won’t work in scripts. This is not intended to be a replacement for aliases.

How can I provide feedback?

As always, you should file issues on GitHub to let us know about any features you’d like added or bugs that you encounter. Additionally, you can join us for the PowerShell Community Call
on the 3rd Thursday of every month.

Being an Open Source project, we value all types of contributions, including code, tests, documentation, issues, and discussion.

We have an amazing active community and this release would not have been possible without you!

The Future

We are still working out our plans for the next release. Stay tuned for our roadmap to be published on this blog!

On behalf of the PowerShell Team,

Steve Lee
Principal Software Engineering Manager
PowerShell Team
https://twitter.com/Steve_MSFT

The post General Availability of PowerShell Core 6.2 appeared first on PowerShell.

Infrastructure + Security: Noteworthy News (March, 2019)

Hi there! Stanislav Belov here again to bring you the next issue of the Infrastructure + Security: Noteworthy News series! 

As a reminder, the Noteworthy News series covers various areas, to include interesting news, announcements, links, tips and tricks from Windows, Azure, and Security worlds on a monthly basis.

Microsoft Azure
Introducing the Azure portal “how to” video series
A new video weekly series highlights specific aspects of the Azure portal so you can be more efficient and productive while deploying your cloud workloads from the portal.
Announcing the general availability of Azure Lab Services
With Azure Lab Services, you can easily set up and provide on-demand access to preconfigured virtual machines (VMs) to teach a class, train professionals, run hackathons or hands-on labs, and more. Simply input what you need in a lab and let the service roll it out to your audience. Your users go to a single place to access all their VMs across multiple labs, and connect from there to learn, explore, and innovate.
Simplifying your environment setup while meeting compliance needs with built-in Azure Blueprints
To help our customers simplify the creation of their environments in Azure while successfully interpreting US and international governance requirements, we are announcing a series of built-in Blueprints Architectures that can be leveraged during your cloud-adoption journey. Azure Blueprints is a free service that helps customers deploy and update cloud environments in a repeatable manner using composable artifacts such as policies, deployment templates, and role-based access controls. This service is built to help customers set up governed Azure environments and can scale to support production implementations for large-scale migrations.
Instantly restore your Azure Virtual Machines using Azure Backup
Instant Restore helps Azure Backup customers quickly recover VMs from the snapshots stored along with the disks. In addition, users get complete flexibility in configuring the retention range of snapshots at the backup policy level depending on the requirements and criticality of the virtual machines associated, giving users more granular control over their resources.
Windows Server
Announcing Windows Admin Center Preview 1902

This preview release builds on the previous 1812 version and adds new functionality including all-new software defined networking tools in the HCI solution, and one of the top-requested customer features: shared connection lists. For folks that use RDCman, we have published a small script that you may use to export your saved RDCman connections to a .CSV file which you can then import with PowerShell to maintain all your RDCman grouping hierarchy using tags.

Windows Client
Making the transition to Windows 10 and Office 365

End of support means that your Windows 7 or Office 2010 software will no longer receive updates, including security updates. But, there’s good news – Windows 10 is the most secure Windows ever and Office 365 delivers the latest in personal productivity. Together they make a perfect pair to help you do everything you were doing before – safer, faster and easier.

Remote Server Administration Tools for Windows 10

Starting with Windows 10 October 2018 Update, RSAT is included as a set of “Features on Demand” right from Windows 10. Just go to “Manage optional features” in Settings and click “Add a feature” to see the list of available RSAT tools. Select and install the specific RSAT tools you need.
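The same can be scripted with the DISM PowerShell cmdlets; a quick sketch (the capability name below is one example):

# List the RSAT Features on Demand and their install state
Get-WindowsCapability -Online -Name 'Rsat.*' | Select-Object Name, State
# Add one of them, e.g. the Active Directory tools
Add-WindowsCapability -Online -Name 'Rsat.ActiveDirectory.DS-LDS.Tools~~~~0.0.1.0'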

Security
The evolution of Microsoft Threat Protection, RSA edition

Microsoft Threat Protection is on a journey to provide organizations seamless, integrated, and comprehensive security across multiple attack vectors. In this RSA edition, we want to share where we are in this journey, the most recent new capabilities launched, and the vision of where we’re going as we continue executing toward our goal of offering best-in-class security for modern organizations.

Part 1 | Part 2

Microsoft Cloud App Security @RSAC 2019

Microsoft at RSA conference announced more than 15 new product capabilities for Microsoft Cloud App Security (MCAS). They are oriented around 4 major themes, as we continue to deliver a unique Cloud Access Security Broker (CASB) that is designed with security professionals in mind and continues to push industry boundaries by providing cutting edge capabilities, simplicity of deployment, centralized management, and innovative automation capabilities.

Announcing new cloud-based technology to empower cyber defenders

Cybersecurity is about people. The frontline defenders who stand between the promise of digital transformation and the daily reality of cyber-attacks need our help. At Microsoft, we’ve made it our mission to empower every person and organization on the planet to achieve more. Today that mission is focused on defenders. We are unveiling two new cloud-based technologies in Microsoft Azure Sentinel and Microsoft Threat Experts that empower security operations teams by reducing the noise, false alarms, time consuming tasks and complexity that are weighing them down. Let me start by sharing some insight into the modern defender experience.

Latest Microsoft Security Intelligence Report is available

The threat landscape is constantly changing. Stay on top of the latest trends that matter to you with our interactive security insights. Our threat researchers are sharing new data every month. On February 28, 2019 at 6 am PST, Microsoft published volume 24 of the Microsoft Security Intelligence Report (SIR).

IT Expert Roundtable: How Microsoft secures elevated access with tools and privileged credentials
Microsoft has been working to establish secure, isolated environments, credential management services and policies, and secure admin workstations to help protect mission-critical systems and services—including those used to manage cloud services, like Azure. Listen in as our experts answer questions about the strategies we use to help secure critical corporate assets and increase protection against emerging pass-the-hash attacks, credential theft, and credential reuse scenarios.
Windows Defender ATP’s EDR capability for Windows 7 and Windows 8.1 now generally available
With Windows 10 we’ve built the most secure Windows ever, by hardening the platform itself and by developing Windows Defender ATP – a unified endpoint security platform for preventative protection, post-breach detection, and automated investigation & response. To help customers stay secure while upgrading to Windows 10, we’ve built an EDR solution for Windows 7 and Windows 8.1 that is simple to deploy and seamless to end-users, providing behavioral based threat detection, investigation and response capabilities. Windows Defender ATP for Windows 7, and Windows 8.1 provides deep visibility on activities that are happening on endpoints, including process, file, network, registry and memory activities, providing security teams with rich, correlated insights into activities and threats happening on older versions of Windows.
Lessons learned from the Microsoft SOC—Part 1: Organization
We’re frequently asked how we operate our Security Operations Center (SOC) at Microsoft (particularly as organizations are integrating cloud into their enterprise estate). This is the first in a three part blog series designed to share our approach and experience, so you can use what we learned to improve your SOC.
New steps to protect Europe from continued cyber threats
On February 20th we expanded Microsoft AccountGuard to twelve new markets across Europe, providing comprehensive threat detection and notification to eligible organizations at no additional cost and customized help to secure their systems.
Securing privileged access for hybrid and cloud deployments in Azure AD
Traditional approaches that focus on securing the entrance and exit points of a network as the primary security perimeter are less effective due to the rise in the use of SaaS apps and personal devices on the Internet. The natural replacement for the network security perimeter in a complex modern enterprise is the authentication and authorization controls in an organization’s identity layer. Privileged administrative accounts are effectively in control of this new “security perimeter.” It’s critical to protect privileged access, regardless of whether the environment is on-premises, cloud, or hybrid on-premises and cloud hosted services. Protecting administrative access against determined adversaries requires you to take a complete and thoughtful approach to isolating your organization’s systems from risks.
Vulnerabilities and Updates
2019 SHA-2 Code Signing Support requirement for Windows and WSUS

To protect your security, Windows operating system updates are dual-signed using both the SHA-1 and SHA-2 hash algorithms to authenticate that updates come directly from Microsoft and were not tampered with during delivery. Due to weaknesses in the SHA-1 algorithm, and to align with industry standards, Microsoft will sign Windows updates exclusively with the more secure SHA-2 algorithm. Customers running legacy OS versions (Windows 7 SP1, Windows Server 2008 R2 SP1 and Windows Server 2008 SP2) will be required to have SHA-2 code signing support installed on their devices by July 2019. Any devices without SHA-2 support will not be offered Windows updates after July 2019. To help prepare you for this change, we will release support for SHA-2 signing in 2019. Windows Server Update Services (WSUS) 3.0 SP2 will receive SHA-2 support to properly deliver SHA-2 signed updates. Refer to the Product Updates section for the migration timeline.

Now available: Microsoft System Center 2019!

As of March 14, 2019, we are pleased to let you know that System Center 2019 is generally available. Customers with a valid license of System Center 2019 can download media from the Volume Licensing Service Center (VLSC). We will also have the System Center 2019 evaluation available on the Microsoft Evaluation Center.

Support Lifecycle
Windows 10, version 1607 end of servicing on April 9, 2019

Windows 10, version 1607 for Education, Enterprise, and IoT Enterprise will reach the end of servicing on April 9, 2019. This means that version 1607, for these editions, will no longer receive security updates. Customers who contact Microsoft Support after the March update will be directed to the latest version of Windows 10 to remain supported.

Windows 7 support will end on January 14, 2020

Microsoft made a commitment to provide 10 years of product support for Windows 7 when it was released on October 22, 2009. When this 10-year period ends, Microsoft will discontinue Windows 7 support so that we can focus our investment on supporting newer technologies and great new experiences. The specific end of support day for Windows 7 will be January 14, 2020. After that, technical assistance and automatic updates that help protect your PC will no longer be made available for the product. Microsoft strongly recommends that you move to Windows 10 sometime before January 2020 to avoid a situation where you need service or support that is no longer available.

Extended Security Updates for SQL Server and Windows Server 2008/2008 R2: Frequently Asked Questions (PDF)

On January 14, 2020, support for Windows Server 2008 and 2008 R2 will end. That means the end of regular security updates. Don’t let your infrastructure and applications go unprotected. We’re here to help you migrate to current versions for greater security, performance and innovation.

Products reaching End of Support for 2019

Products reaching End of Support for 2020

Microsoft Premier Support News
WorkshopPLUS – Windows PowerShell Azure Resource Manager introduces you to the basics of common Microsoft Azure workloads and provides guidance and education for your IT engineers using the power of PowerShell. This 3-day engagement includes education sessions to help enhance your team’s technical and operational skills and drive operational readiness, along with hands-on labs developed by a Microsoft engineer who works with you to create a working Proof of Concept (PoC) in your environment using AzureRM PowerShell commands.
The Windows Server 2019: New Features and Upgrade workshop provides students with a comprehensive introduction to the wide range of new and improved features in Windows Server 2019. Throughout the modules, we will provide a broad understanding on new or updated features and roles in Windows Server 2019.
Check out Microsoft Services public blog for new Proactive Services as well as new features and capabilities of the Services Hub, On-demand Assessments, and On-demand Learning platforms.

Step 7. Discover shadow IT and take control of your cloud apps: Top 10 actions to secure your enviro

Cloud-based services have significantly increased productivity for today’s workforce, prompting users to adopt new cloud apps and services and making it a challenge for you to keep up. Microsoft Cloud App Security (MCAS), a cloud access security broker (CASB), helps you gain control over shadow IT with tools that give you visibility into the cloud apps and services used in your organization, assess them for risk, and provide sophisticated analytics. You can then make an informed decision about whether you want to sanction the apps you discover or block them from being accessed.

 

Read the full blog here.


Step 6. Manage mobile apps: top 10 actions to secure your environment

In our last blog, Step 5. Set up mobile device management, we introduced ContosoCars to illustrate the journey of implementing Intune as part of your UEM strategy. We continue their story to demonstrate how you can enhance endpoint security by managing mobile apps and tracking the deployment.

 


Read the full blog here.


LiveFyre commenting will no longer be available on the PowerShell Gallery

Commenting on the PowerShell Gallery is provided by LiveFyre, a third-party comment system. LiveFyre is no longer supported by Adobe, and therefore we are unable to service issues as they arise. We have received reports of authentication failing for Twitter and Microsoft AAD, and unfortunately we are unable to bring back those services. As we cannot predict when more issues will occur, and we cannot fix issues as they arise, we must deprecate the use of LiveFyre on the PowerShell Gallery. As of May 1st, 2019, LiveFyre commenting will no longer be available on the PowerShell Gallery. Unfortunately, we are unable to migrate comments off of LiveFyre, so comment history will be lost.

How will package consumers be able to get support?

The other existing channels for getting support and contacting package owners will still be available on the Gallery. The left pane of the package page is the best place to get support. If you are looking to contact the package owner, select “Contact Owners” on the package page. If you are looking to contact Gallery support use the “Report” button. If the package owner has provided a link to their project site in their module manifest a link to their site is also available in the left pane and can be a good avenue for support. For more information on getting package support please see our documentation.

Questions

We appreciate your understanding as we undergo this transition.
Please direct any questions to sysmith@microsoft.com.

The post LiveFyre commenting will no longer be available on the PowerShell Gallery appeared first on PowerShell.

The PowerShell Gallery is now more Accessible

Over the past few months, the team has been working hard to make the PowerShell Gallery as accessible as possible. This blog details why it matters and what work has been done.

Why making the PowerShell Gallery more accessible was a priority

Accessible products change lives and allow everyone to be included in our product. Accessibility is also a major component of striving toward Microsoft’s mission to “empower every person and every organization on the planet to achieve more.” Improvements in accessibility mean improvements in usability, which makes the experience better for everyone. In doing accessibility testing for the Gallery, for example, we found that it was confusing for users to distinguish between “deleting” and “unlisting” packages. Clearly naming this action in the UI makes the process of unlisting a package clearer for all package owners.

The steps taken to make the PowerShell Gallery more accessible

The first part of the process focused on bug generation and resolution. We used scanning technology to ensure that the Gallery alerts and helper texts were configured properly and were compatible with screen reading technology. We used Keros scanning, Microsoft’s premier accessibility tool, to identify accessibility issues, and worked to triage and fix the detected issues.

For the second part of the process, we undertook a scenario-focused accessibility study. For the study, blind or visually impaired IT professionals went through core scenarios for using the Gallery. These scenarios included: finding packages, publishing packages, managing packages, and getting support. The majority of the scenarios focused on searching for packages as we believe this is the primary way customers interact with the Gallery. After the study concluded we reviewed the results and watched recordings of the participants navigating through our scenarios. This process allowed us to focus on improving our lowest performing scenarios by addressing specific usability improvements. After making these improvements we underwent a review by accessibility experts to assure we had high usability and accessibility.

Usability Improvements

  • Screen Reader Compatibility: Screen reader technologies make consuming web content accessible so we underwent thorough review, and improvement, to ensure that the Gallery was providing accurate, consistent, and helpful information to screen readers. Some examples of areas we improved:
    • Accurate Headers
    • Clearly labeled tables
    • Helpful tool tips
    • Labeled graph node points
  • Improved ARIA Tags: Accessible Rich Internet Applications (ARIA) is a specification that makes web content more accessible by passing helpful information to assistive technologies such as screen readers. We undertook a thorough review, and enhancement, of our ARIA tags to make sure they were as helpful as possible. One improvement we made, for example, was an ARIA description explaining how to use tags in the Gallery search bar.
  • Renamed UI elements to be more descriptive: Through our review we noticed we were generating some confusion by labeling the unlist button as “delete” and we worked to fix these types of issues.
  • Filters: We added filters for the operating system to make it easier to find compatible packages.
  • Results description: we made searching for packages more straightforward by displaying the total number of results and pages.
  • Page Scrolling: we made searching for packages easier by adding multi-page scrolling.

Reporting Issues

Our goal is to make the Gallery completely user friendly. If you encounter any issues in the PowerShell Gallery that make it less accessible/usable we would love to hear about it on our GitHub page. Please file an issue letting us know what we can do to make the Gallery even more accessible.

The post The PowerShell Gallery is now more Accessible appeared first on PowerShell.

Secure your mobile email with Microsoft EMS and Microsoft Outlook for iOS and Android

(This post is co-authored by Adrian Moore, Senior Program Manager, and Mayunk Jain, Product Manager, Microsoft 365 Security)

Whether you have an official BYOD (bring your own device) policy or not, chances are you caught up on some work email this weekend on your mobile phone. If so, you’re not alone: more than 80% of employees admit to using non-approved SaaS apps for work purposes, including mobile email. It is worth noting that 63% of confirmed data breaches involve weak, default, or stolen passwords, and according to Verizon’s 2018 Breach Investigations report, 92 percent of malware is still delivered by email.

 

As an IT leader investing in the Microsoft 365 modern workplace to meet cyber-security challenges head-on, secure email access is likely to be a key part of your strategy. In this article, we take a technical deep dive into the integrated approach of Microsoft Enterprise Mobility + Security (EMS) and Microsoft Outlook for iOS and Android devices, which we consider the gold standard of secure mobile email access.

How it works

Let us dig deeper and explore the configuration settings that deliver the rich experience of Microsoft secure mobile email. Check out the Sway below, or download the PDF.

MSIX: Package Support Framework Part TWO – Preparation

___________________________________________________________________________________________________________________________

IMPORTANT ANNOUNCEMENT FOR OUR READERS!

AskPFEPlat is in the process of a transformation to the new Core Infrastructure and Security TechCommunity, and will be moving by the end of March 2019 to our new home at https://aka.ms/CISTechComm (hosted at https://techcommunity.microsoft.com). Please bear with us while we are still under construction!

We will continue bringing you the same great content, from the same great contributors, on our new platform. Until then, you can access our new content on either https://aka.ms/askpfeplat as you do today, or at our new site https://aka.ms/CISTechComm. Please feel free to update your bookmarks accordingly!

Why are we doing this? Simple really; we are looking to expand our team internally in order to provide you even more great content, as well as take on a more proactive role in the future with our readers (more to come on that later)! Since our team encompasses many more roles than Premier Field Engineers these days, we felt it was also time we reflected that initial expansion.

If you have never visited the TechCommunity site, it can be found at https://techcommunity.microsoft.com. On the TechCommunity site, you will find numerous technical communities across many topics, which include discussion areas, along with blog content.

NOTE: In addition to the AskPFEPlat-to-Core Infrastructure and Security transformation, Premier Field Engineers from all technology areas will be working together to expand the TechCommunity site even further, joining together in the technology agnostic Premier Field Engineering TechCommunity (along with Core Infrastructure and Security), which can be found at https://aka.ms/PFETechComm!

As always, thank you for continuing to read the Core Infrastructure and Security (AskPFEPlat) blog, and we look forward to providing you more great content well into the future!

__________________________________________________________________________________________________________________________

 

Hi everyone, Ingmar Oosterhoff, Johannes Freundorfer, and Matthias Herfurth here continuing from our previous blog, which can be found at https://techcommunity.microsoft.com/t5/Core-Infrastructure-and-Security/MSIX-Package-Support-Framework-Part-One-The-Blueprint/ba-p/363594. In this post, we will now proceed with preparing our machine to make use of the PSF later on.

NOTE: To save on resources and avoid creating additional virtual machines, I’m going to use my MSIX packaging machine for this (makes sense to me anyway).

Modifying an existing MSIX package is nice and easy using MakeAppx.exe. As MakeAppx.exe is part of the Microsoft Windows 10 SDK, the first step in our process will be to download it from: https://developer.microsoft.com/en-US/windows/downloads/windows-10-sdk

Once downloaded, install it, and select 2 components, as shown below:

Next, create some folders to use as working directories…

I’ve created a Resources folder in C:\ with MakeAppx (for later use) and Nuget folders as subfolders, as shown in the image below:

The next step in the process will be downloading the Package Support Framework using Nuget as follows.

  • Download Nuget from https://www.nuget.org/downloads
  • Save nuget.exe in the C:\Resources\Nuget folder
  • Start cmd.exe
  • CD to the C:\Resources\Nuget folder
  • Run the following command: nuget install Microsoft.PackageSupportFramework

This will create a subfolder in the C:\Resources\Nuget folder containing the framework.
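If you’d rather script those steps, here’s a minimal PowerShell sketch (the nuget.exe download URL is the standard dist.nuget.org one):

# Create the working folders
New-Item -ItemType Directory -Path C:\Resources\MakeAppx, C:\Resources\Nuget -Force | Out-Null
# Download nuget.exe and pull down the Package Support Framework
Invoke-WebRequest -Uri 'https://dist.nuget.org/win-x86-commandline/latest/nuget.exe' -OutFile 'C:\Resources\Nuget\nuget.exe'
Set-Location C:\Resources\Nuget
.\nuget.exe install Microsoft.PackageSupportFramework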

Snapshot your VM to allow you to revert to this state.

Now that we have the tools, stay tuned for our next post, where we’ll download and compile a simple “made to break”, ready-to-repair application!

Thanks for reading!

DSC Resource Kit Release April 2019

We just released the DSC Resource Kit!

This release includes updates to 13 DSC resource modules. In the past 6 weeks, 87 pull requests have been merged and 90 issues have been closed, all thanks to our amazing community!

The modules updated in this release are:

  • CertificateDsc
  • ComputerManagementDsc
  • NetworkingDsc
  • OfficeOnlineServerDsc
  • SecurityPolicyDsc
  • SharePointDsc
  • SqlServerDsc
  • StorageDsc
  • xActiveDirectory
  • xPSDesiredStateConfiguration
  • xSMBShare
  • xWindowsUpdate
  • xWinEventLog

xWebAdministration is also in the pipeline for release as soon as it passes all tests.

For a detailed list of the resource modules and fixes in this release, see the Included in this Release section below.

Our latest community call for the DSC Resource Kit was last Wednesday, March 27. A recording of the call will be posted on the PowerShell YouTube channel soon. You can join us for the next call at 12PM (Pacific time) on May 8 to ask questions and give feedback about your experience with the DSC Resource Kit.

The next DSC Resource Kit release will be on Wednesday, May 15.

We strongly encourage you to update to the newest version of all modules using the PowerShell Gallery, and don’t forget to give us your feedback in the comments below, on GitHub, or on Twitter (@PowerShell_Team)!
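For example, to update one of the modules in this release from the Gallery:

# Update a single resource module (repeat for the others listed above)
Update-Module -Name ComputerManagementDsc
# Check which version is now installed
Get-InstalledModule -Name ComputerManagementDsc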

Please see our documentation here for information on the support of these resource modules.

Included in this Release

You can see a detailed summary of all changes included in this release in the table below. For past release notes, go to the README.md or CHANGELOG.md file on the GitHub repository page for a specific module (see the How to Find DSC Resource Modules on GitHub section below for details on finding the GitHub page for a specific module).

Module Name Version Release Notes
CertificateDsc 4.5.0.0
  • Fix example publish to PowerShell Gallery by adding gallery_api environment variable to AppVeyor.yml – fixes Issue 187.
  • CertificateDsc.Common.psm1
    • Exclude assemblies that set DefinedTypes to null instead of an empty array to prevent failures on GetTypes(). This issue occurred with the Microsoft.WindowsAzure.Storage.dll assembly.
ComputerManagementDsc 6.3.0.0
  • Correct PSSA custom rule violations – fixes Issue 209.
  • Correct long example filenames for PowerShellExecutionPolicy examples.
  • Opted into Common Tests “Required Script Analyzer Rules”, “Flagged Script Analyzer Rules”, “New Error-Level Script Analyzer Rules” “Custom Script Analyzer Rules” and “Relative Path Length” – fixes Issue 152.
  • PowerPlan:
    • Added support to specify the desired power plan either as name or guid. Fixes Issue 59
    • Changed the resource so it uses Windows APIs instead of WMI/CIM (Workaround for Server 2012R2 Core, Nano Server, Server 2019 and Windows 10). Fixes Issue 155 and Issue 65
NetworkingDsc 7.1.0.0
  • New Resource: NetAdapterState to enable or disable a network adapter – fixes Issue 365
  • Fix example publish to PowerShell Gallery by adding gallery_api environment variable to AppVeyor.yml – fixes Issue 385.
  • MSFT_Proxy:
    • Fixed ProxyServer, ProxyServerExceptions and AutoConfigURL parameters so that they correctly support strings longer than 255 characters – fixes Issue 378.
OfficeOnlineServerDsc 1.3.0.0
  • Changes to OfficeOnlineServerDsc
    • Added pull request template and issue templates.
  • OfficeOnlineServerInstall
    • Added check to test if the setup file is blocked or not;
    • Added ability to install from a UNC path, by adding server to IE Local Intranet Zone. This will prevent an endless wait caused by security warning;
  • OfficeOnlineServerInstallLanguagePack
    • Added check to test if the setup file is blocked or not;
    • Added ability to install from a UNC path, by adding server to IE Local Intranet Zone. This will prevent an endless wait caused by security warning
SecurityPolicyDsc 2.8.0.0
  • Bug fix – Issue 71 – Issue Added Validation Attributes to AccountPolicy & SecurityOption
  • Bug fix – Network_security_Restrict_NTLM security option names now maps to correct keys. This fix could impact your systems.
  • Updated LICENSE file to match the Microsoft Open Source Team standard. Fixes Issue 108
  • Refactored the SID translation process to not throw a terminating error when called from Test-TargetResource
  • Updated verbose message during the SID translation process to identify the policy where an orphaned SID exists
  • Added the EType “FUTURE” to the security option “Network_security_Configure_encryption_types_allowed_for_Kerberos”
  • Documentation update to include all valid settings for security options and account policies
SharePointDsc 3.3.0.0
  • SharePointDsc generic
    • Implemented workaround for PSSA v1.18 issue. No further impact for the rest of the resources
    • Fixed issue where powershell session was never removed and leaded to memory leak
    • Added readme.md file to Examples folder, which directs users to the Wiki on Github
  • SPAppManagementServiceApp
    • Added ability to create Service App Proxy if this is not present
  • SPConfigWizard
    • Improved logging
  • SPFarm
    • Corrected issue where the resource would try to join a farm, even when the farm was not yet created
    • Fixed issue where an error was thrown when no DeveloperDashboard parameter was specfied
  • SPInstall
    • Added check to unblock setup file if it is blocked because it is coming from a network location. This to prevent endless wait
    • Added ability to install from a UNC path, by adding server to IE Local Intranet Zone. This will prevent an endless wait caused by security warning
  • SPInstallLanguagePack
    • Added check to unblock setup file if it is blocked because it is coming from a network location. This to prevent endless wait
    • Corrected issue with Norwegian language pack not being correctly detected
    • Added ability to install from a UNC path, by adding server to IE Local Intranet Zone. This will prevent an endless wait caused by security warning
  • SPProductUpdate
    • Added ability to install from a UNC path, by adding server to IE Local Intranet Zone. This will prevent an endless wait caused by security warning
    • Major refactor of this resource to remove the dependency on the existence of the farm. This allows the installation of product updates before farm creation.
  • SPSearchContentSource
    • Corrected typo that prevented a correct check for ContinuousCrawl
  • SPSearchServiceApp
    • Added possibility to manage AlertsEnabled setting
  • SPSelfServiceSiteCreation
    • Added new SharePoint 2019 properties
  • SPSitePropertyBag
    • Added new resource
  • SPWebAppThrottlingSettings
    • Fixed issue with ChangeLogRetentionDays not being applied
SqlServerDsc 12.4.0.0
  • Changes to SqlServerDsc
    • Added new resources.
      • SqlRSSetup
    • Added helper module DscResource.Common from the repository DscResource.Template.
      • Moved all helper functions from SqlServerDscHelper.psm1 to DscResource.Common.
      • Renamed Test-SqlDscParameterState to Test-DscParameterState.
      • New-TerminatingError error text for a missing localized message now matches the output even if the “missing localized message” localized message is also missing.
    • Added helper module DscResource.LocalizationHelper from the repository DscResource.Template, this replaces the helper module CommonResourceHelper.psm1.
    • Cleaned up unit tests, mostly around loading cmdlet stubs and loading classes stubs, but also some tests that were using some odd variants.
    • Fix all integration tests according to issue PowerShell/DscResource.Template#14.
  • Changes to SqlServerMemory
    • Updated the CIM class to Win32_ComputerSystem (instead of Win32_PhysicalMemory) because the correct memory size was not being detected on Azure VMs (issue 914).
  • Changes to SqlSetup
    • Split integration tests into two jobs, one for running integration tests for SQL Server 2016 and another for running integration test for SQL Server 2017 (issue 858).
    • Localized messages for Master Data Services no longer start and end with single quote.
    • When installing features a verbose message is written if a feature is found to already be installed. It no longer quietly removes the feature from the /FEATURES argument.
    • Cleaned up a bit in the tests, removed excessive piping.
    • Fixed minor typo in examples.
    • A new optional parameter FeatureFlag parameter was added to control breaking changes. Functionality added under a feature flag can be toggled on or off, and could be changed later to be the default. This way we can also make more of the new functionalities the default in the same breaking change release (issue 1105).
    • Added a new way of detecting if the shared feature CONN (Client Tools Connectivity, and SQL Client Connectivity SDK), BC (Client Tools Backwards Compatibility), and SDK (Client Tools SDK) is installed or not. The new functionality is used when the parameter FeatureFlag is set to "DetectionSharedFeatures" (issue 1105).
    • Added a new helper function Get-InstalledSharedFeatures to move out some of the code from the Get-TargetResource to make unit testing easier and faster.
    • Changed the logic of “Build the argument string to be passed to setup” to not quote the value if root directory is specified (issue 1254).
    • Moved some resource specific helper functions to the new helper module DscResource.Common so they can be shared with the new resource SqlRSSetup.
    • Improved verbose messages in Test-TargetResource function to more clearly tell if features are already installed or not.
    • Refactored unit tests for the functions Test-TargetResource and Set-TargetResource to improve testing speed.
    • Modified the Test-TargetResource and Set-TargetResource to not be case-sensitive when comparing feature names. This was handled correctly in real-world scenarios, but failed when running the unit tests (and testing casing).
  • Changes to SqlAGDatabase
    • Fix MatchDatabaseOwner to check for CONTROL SERVER, IMPERSONATE LOGIN, or CONTROL LOGIN permission in addition to IMPERSONATE ANY LOGIN.
    • Update and fix MatchDatabaseOwner help text.
  • Changes to SqlAG
    • Updated documentation on the behaviour of defaults as they only apply when creating a group.
  • Changes to SqlAGReplica
    • AvailabilityMode, BackupPriority, and FailoverMode defaults only apply when creating a replica not when making changes to an existing replica. Explicit parameters will still change existing replicas (issue 1244).
    • ReadOnlyRoutingList now gets updated without throwing an error on the first run (issue 518).
    • Test-Resource fixed to report whether ReadOnlyRoutingList desired state has been reached correctly (issue 1305).
  • Changes to SqlDatabaseDefaultLocation
    • No longer does the Test-TargetResource fail on the second test run when the backup file path was changed, and the path was ending with a backslash (issue 1307).
StorageDsc 4.6.0.0
  • Fix example publish to PowerShell Gallery by adding gallery_api environment variable to AppVeyor.yml – fixes Issue 202.
  • Added “DscResourcesToExport” to manifest to improve information in PowerShell Gallery and removed wildcards from “FunctionsToExport”, “CmdletsToExport”, “VariablesToExport” and “AliasesToExport” – fixes Issue 192.
  • Clean up module manifest to correct Author and Company – fixes Issue 191.
  • Correct unit tests for DiskAccessPath to test exact number of mocks called – fixes Issue 199.
  • Disk:
    • Added a minimum wait time of 3s after New-Partition using a while loop. The problem occurs when the partition is created and Format-Volume is attempted before the volume has completed. There appears to be no property to determine if the partition is sufficiently ready to format, and it will often format as a raw volume when the error occurs – fixes Issue 85.
xActiveDirectory 2.25.0.0
  • Added xADReplicationSiteLink
    • New resource added to facilitate replication between AD sites
  • Updated xADObjectPermissionEntry to use AD: which is more generic when using Get-Acl and Set-Acl than using Microsoft.ActiveDirectory.Management\ActiveDirectory:://RootDSE/
  • Changes to xADComputer
    • Minor clean up of unit tests.
  • Changes to xADUser
    • Added TrustedForDelegation parameter to xADUser to support enabling/disabling Kerberos delegation
    • Minor clean up of unit tests.
  • Added the Ensure read-only property to xADDomainController to fix the Get-TargetResource return bug (issue 155).
    • Updated readme and added release notes
  • Updated xADGroup to support group membership from multiple domains (issue 152). Robert Biddle (@robbiddle) and Jan-Hendrik Peters (@nyanhp)
xPSDesiredStateConfiguration 8.6.0.0
  • Fixes style inconsistencies in PublishModulesAndMofsToPullServer.psm1. issue 530
  • Suppresses forced Verbose output in MSFT_xArchive.EndToEnd.Tests.ps1, MSFT_xDSCWebService.Integration.tests.ps1, MSFT_xPackageResource.Integration.Tests.ps1, MSFT_xRemoteFile.Tests.ps1, MSFT_xUserResource.Integration.Tests.ps1, MSFT_xWindowsProcess.Integration.Tests.ps1, and xFileUpload.Integration.Tests.ps1. issue 514
  • Fixes issue in xGroupResource Integration tests where the tests would fail if the System.DirectoryServices.AccountManagement namespace was not loaded.
  • Tests\Integration\MSFT_xDSCWebService.Integration.tests.ps1:
    • Fixes issue where tests fail if a self-signed certificate for DSC does not already exist. issue 581
  • Fixes all instances of the following PSScriptAnalyzer issues:
    • PSUseOutputTypeCorrectly
    • PSAvoidUsingConvertToSecureStringWithPlainText
    • PSPossibleIncorrectComparisonWithNull
    • PSAvoidDefaultValueForMandatoryParameter
    • PSAvoidUsingInvokeExpression
    • PSUseDeclaredVarsMoreThanAssignments
    • PSAvoidGlobalVars
  • xPackage and xMsiPackage
    • Add an ability to ignore a pending reboot if requested by package installation.
  • xRemoteFile
    • Updated MatchSource description in README.md. issue 409
    • Improved layout of MOF file to move description left.
    • Added function help for all functions.
    • Moved New-InvalidDataException to CommonResourceHelper.psm1. issue 544
  • Added full stops to the end of all functions help in CommonResourceHelper.psm1.
  • Added unit tests for New-InvalidArgumentException, New-InvalidDataException and New-InvalidOperationException CommonResourceHelper.psm1 functions.
  • Changes to MSFT_xDSCWebService
    • Fixed issue 528: Unable to disable self-signed certificates using AcceptSelfSignedCertificates=$false
    • Fixed issue 460 : Redeploy DSC Pull Server fails with error
  • Opt-in to the following Meta tests:
    • Common Tests – Custom Script Analyzer Rules
    • Common Tests – Flagged Script Analyzer Rules
    • Common Tests – New Error-Level Script Analyzer Rules
    • Common Tests – Relative Path Length
    • Common Tests – Required Script Analyzer Rules
    • Common Tests – Validate Markdown Links
  • Add .markdownlint.json file using settings from here as a starting point.
  • Changes to Tests\Unit\MSFT_xMsiPackage.Tests.ps1
    • Fixes issue where tests fail if executed from a drive other than C:. issue 573
  • Changes to Tests\Integration\xWindowsOptionalFeatureSet.Integration.Tests.ps1
    • Fixes issue where tests fail if a Windows Optional Feature that is expected to be disabled has a feature state of “DisabledWithPayloadRemoved”. issue 586
  • Changes to Tests\Unit\MSFT_xPackageResource.Tests.ps1
    • Fixes issue where tests fail if run from a folder that contains spaces. issue 580
  • Changes to the test helper Enter-DscResourceTestEnvironment so that it only updates DSCResource.Tests when more than 60 minutes have passed since it was last pulled. This improves test execution performance and reduces the likelihood of connectivity issues caused by an inability to pull DSCResource.Tests. issue 505
  • Updated CommonTestHelper.psm1 to resolve style guideline violations.
  • Adds helper functions for use when creating test administrator user accounts, and updates the following tests to use credentials created with these functions:
    • MSFT_xScriptResource.Integration.Tests.ps1
    • MSFT_xServiceResource.Integration.Tests.ps1
    • MSFT_xWindowsProcess.Integration.Tests.ps1
    • xServiceSet.Integration.Tests.ps1
  • Fixes the following issues:
xSMBShare 2.2.0.0
  • Improved Code logic & cosmetic changes
  • Update appveyor.yml to use the default template.
  • Added default template files .codecov.yml, .gitattributes, and .gitignore, and .vscode folder.
  • Changes to xSmbShare
xWindowsUpdate 2.8.0.0
  • xWindowsUpdateAgent: Fixed verbose statement returning incorrect variable
  • Tests no longer fail on Assert-VerifiableMocks; those calls have been renamed to Assert-VerifiableMock (a breaking change in Pester v4).
  • README.md has been updated with correct description of the resources (issue 58).
  • Updated appveyor.yml to use the correct parameters to call the test framework.
  • Update appveyor.yml to use the default template.
  • Added default template files .gitattributes, and .gitignore, and .vscode folder.
xWinEventLog 1.3.0.0
  • THIS MODULE HAS BEEN DEPRECATED. It will no longer be released. Please use the “WinEventLog” resource in ComputerManagementDsc instead.
  • Update appveyor.yml to use the default template.
  • Added default template files .codecov.yml, .gitattributes, and .gitignore, and .vscode folder.

How to Find Released DSC Resource Modules

To see a list of all released DSC Resource Kit modules, go to the PowerShell Gallery and display all modules tagged as DSCResourceKit. You can also enter a module’s name in the search box in the upper right corner of the PowerShell Gallery to find a specific module.

Of course, you can also always use PowerShellGet (available starting in WMF 5.0) to find modules with DSC Resources:

# To list all modules that are tagged as DSCResourceKit
Find-Module -Tag DSCResourceKit 
# To list all DSC resources from all sources 
Find-DscResource

Please note only those modules released by the PowerShell Team are currently considered part of the ‘DSC Resource Kit’ regardless of the presence of the ‘DSC Resource Kit’ tag in the PowerShell Gallery.

To find a specific module, go directly to its URL on the PowerShell Gallery:
http://www.powershellgallery.com/packages/< module name >
For example:
http://www.powershellgallery.com/packages/xWebAdministration
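
PowerShellGet can also look a module up by name directly:

Find-Module -Name xWebAdministration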

How to Install DSC Resource Modules From the PowerShell Gallery

We recommend that you use PowerShellGet to install DSC resource modules:

Install-Module -Name < module name >

For example:

Install-Module -Name xWebAdministration

To update all previously installed modules at once, open an elevated PowerShell prompt and use this command:

Update-Module

After installing modules, you can discover all DSC resources available to your local system with this command:

Get-DscResource

How to Find DSC Resource Modules on GitHub

All resource modules in the DSC Resource Kit are available open-source on GitHub.
You can see the most recent state of a resource module by visiting its GitHub page at:
https://github.com/PowerShell/< module name >
For example, for the CertificateDsc module, go to:
https://github.com/PowerShell/CertificateDsc.

All DSC modules are also listed as submodules of the DscResources repository in the DscResources folder and the xDscResources folder.

How to Contribute

You are more than welcome to contribute to the development of the DSC Resource Kit! There are several different ways you can help. You can create new DSC resources or modules, add test automation, improve documentation, fix existing issues, or open new ones.
See our contributing guide for more info on how to become a DSC Resource Kit contributor.

If you would like to help, please take a look at the list of open issues for the DscResources repository.
You can also check issues for specific resource modules by going to:
https://github.com/PowerShell/< module name >/issues
For example:
https://github.com/PowerShell/xPSDesiredStateConfiguration/issues

Your help in developing the DSC Resource Kit is invaluable to us!

Questions, comments?

If you’re looking into using PowerShell DSC, have questions or issues with a current resource, or would like a new resource, let us know in the comments below, on Twitter (@PowerShell_Team), or by creating an issue on GitHub.

Katie Kragenbrink
Software Engineer
PowerShell DSC Team
@katiedsc (Twitter)
@kwirkykat (GitHub)

The post DSC Resource Kit Release April 2019 appeared first on PowerShell.

Part 3: Intune’s Journey to a Highly Scalable Globally Distributed Cloud Service


Over the last couple of months I’ve been writing about Intune’s journey to become a globally scaled cloud service running on Azure. I’m treating this as Part 3 (here’s Part 1 and Part 2) of a 4-part series.

 

Today, I’ll explain how we were able to make such dramatic improvements to our SLAs, scale, performance, and engineering agility.

 

I think the things we learned while doing this can apply to any engineering team building a cloud service.

 

Last time, I noted the three major things we learned during the development process:

  1. Every data move that copies or moves data from one location to another must have data integrity checks to make sure that the copied data is consistent with the source data.  We discovered that there are a variety of efficient/intelligent ways to achieve this without requiring an excessive amount of time or memory. 
  2. It is a very bad idea to try building your own database for these purposes (No-SQL or SQL, etc), unless you are already in the database business.
  3. It’s far better to over-provision than over-optimize.  In our case, because we set our orange line thresholds low, we had sufficient time to react and re-architect.

After we rolled out our new architecture, we focused on evolving and optimizing our services/resources and improving agility.  We came up with 4 groups of goals to evolve quickly and at high quality:

  • Availability/SLAs
  • Scale
  • Performance
  • Engineering agility

Here’s how we did it:

 

#1: Availability/SLAs

The overarching goal we defined for availability/SLA (strictly speaking, SLO) was to achieve 4+ 9’s for all our Intune services.

 

Before we started the entire process described in this blog series, less than 25% of our services were running at 4+ 9’s, and 90% were running at 3+ 9’s.

 

Clearly something needed to change.

 

First, a carefully selected group of engineers began a systematic review of where we needed to drive SLA improvements across the 150+ services.  Based on what we learned, we saw dramatic improvements over the next six months.  This review uncovered a variety of hidden issues, and the fixes we rolled out made a huge difference.  Here are a few of the big ones:

 

  • Retries:
    Our infrastructure supported a rudimentary form of retries and it needed some additional technical sophistication, specifically in terms of customized request timeouts. Initially, there was no way to cancel a request if it took more than a specified set time for a specific service. This meant that a request could never really be retried, because if a timeout happened, it most likely exceeded the threshold for the end-to-end operation.  To address this, we added a request timeout feature that enabled services to specify custom limits on the maximum time a request can take before being canceled. This allowed services to specify appropriate time limits and gave them several other retry semantics (such as backoffs) within the bounds of the overall end-to-end operation. This was a huge improvement, and it reduced our end-to-end timeouts by more than half. (A minimal sketch of this retry-plus-timeout pattern appears after this list.)
  • Circuit breakers:
    It didn’t take long for us to realize that retries can cause a retry storm and result in timeouts becoming much worse. We added a circuit breaker pattern to handle this.
  • Caching:
    We started caching responses for repeated requests that matched the same criteria without breaking security boundaries.
  • Threading:
    During cold starts and request spikes, we noticed that the underlying framework (.NET) took time to spin up threads. To address this, we adjusted the minimum number of worker threads a service maintains to account for these behaviors and made it configurable on a per-service basis.  This eliminated almost all of the timeouts that happened during these spikes and/or cold starts.
  • Intelligent routing:
    This was a learning algorithm that determined the target service instance that had the best chance of serving the request successfully. This kind of routing avoided a hung or slow node, a deadlocked process, a slow network VM, and any other random issues experienced by the services. In a distributed cloud service operating at scale, these kinds of underlying issues must be expected and are more the norm than the exception. This ended up being a critical feature for us to design and implement, and it made a huge difference across the board, especially when it came to reducing tail latencies.
  • Customized configurations:
    Each of our services had slightly different requirements or behaviors, and it was important for us to provide knobs to customize certain settings for optimal behavior. Examples of such customized settings included http server queue lengths, service point counts, max pending accepts, etc.
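
To make the retry-plus-timeout pattern from the first bullet concrete, here is a minimal PowerShell sketch. The function name, limits, and backoff values are illustrative assumptions, not Intune’s actual implementation:

# Illustrative sketch of a per-attempt timeout with exponential backoff.
# Invoke-WithRetry and all defaults are hypothetical names/values.
function Invoke-WithRetry {
    param(
        [scriptblock] $Request,       # the operation to attempt
        [int] $TimeoutSeconds = 5,    # per-attempt limit, tuned per service
        [int] $MaxAttempts    = 3,
        [int] $BaseBackoffMs  = 200   # doubled after each failed attempt
    )
    for ($attempt = 1; $attempt -le $MaxAttempts; $attempt++) {
        $job = Start-Job -ScriptBlock $Request
        if (Wait-Job -Job $job -Timeout $TimeoutSeconds) {
            $result = Receive-Job -Job $job
            Remove-Job -Job $job
            return $result
        }
        # Cancel the attempt so it cannot outlive the end-to-end operation,
        # then back off before retrying.
        Stop-Job -Job $job
        Remove-Job -Job $job -Force
        Start-Sleep -Milliseconds ($BaseBackoffMs * [math]::Pow(2, $attempt - 1))
    }
    throw "Request failed after $MaxAttempts attempts."
}

# Example: retry a flaky call with 2-second attempts.
Invoke-WithRetry -Request { Invoke-RestMethod 'https://example.com/api' } -TimeoutSeconds 2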

The result of all the above efforts was phenomenal.  The chart below demonstrates this dramatic improvement after the changes were rolled out.

You’ll notice that we started with less than 25% of services at 4+ 9’s, and by the time we rolled out all the changes, 95% or more of our services were running at 4+ 9’s!  Today, Intune maintains 4+ 9’s for over 95% of our services across all our clusters around the world.

 


#2: Scale

The re-architecture process enabled us to rely primarily on scaling out the cluster to handle our growth. It was clear that the growth we were experiencing also required us to invest in scale-up improvements.  The biggest workload for Intune is triggered when a device checks in to the service to receive policies, settings, apps, etc. – and we chose this workload as our first target.

The scale target goal we set was 50k devices checking in within a short period (approximately 10 minutes) for a given cluster.  For reference, at the time we set this goal, our scale was at 3k devices in a 10-minute window for an individual cluster – in other words our scale had to increase by about 17x.  As with the SLA work we did, a group of engineers pursued this effort and acted as a single unit to tackle the problem.  Some of the issues they identified and improved included:

  • Batching:
    Some of the calls were made in a sequential manner and we identified a way for these calls to be batched together and sent in one request. This avoided multiple round trips and serialization/deserialization costs.
  • Service Instance Count:
    Some of the critical services in our cluster were running with a limited instance count. We realized that these were the first bottlenecks preventing us from scaling up.  By simply increasing the instance count of these services, without changing the node or cluster sizes, we completely eliminated these bottlenecks.
  • Caching:
    Some of the properties of an account/tenant or user were frequently accessed. These properties were retrieved by various calls to the service(s) which held this data. We realized that we could cache these properties in the token that a request carried.  This eliminated the need for many calls to other services, along with the latencies and resource consumption associated with them.
  • Reduce Calls:
    We developed several ways to reduce calls from one service to another. For example, we used a Bloom filter to determine whether a change had happened, and then used that information to reduce a load of about 1 million calls to approximately 10k. (A minimal Bloom filter sketch appears after this list.)
  • Leverage SLA improvements:
    We leveraged many of the improvements called out in the SLA section above, even though both efforts were operating (more or less) in parallel at the time. We also leveraged the customized configurations to experiment, learn, and test.
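
The Bloom filter mentioned under “Reduce Calls” can be sketched in a few lines of PowerShell. This is an illustrative toy (the sizes, two-hash scheme, and function names are our assumptions, not Intune’s code):

# Toy Bloom filter: answer "has this key possibly changed?" locally before
# calling another service. A $false from Test-Key means "definitely not",
# so the remote call can be skipped; $true may be a false positive.
$size = 8388608                                      # bits in the filter (illustrative)
$bits = New-Object System.Collections.BitArray($size)
$md5  = [System.Security.Cryptography.MD5]::Create()

function Get-BitPositions([string] $Key) {
    # Derive two positions per key from one MD5 digest (k = 2 hash functions).
    $hash = $md5.ComputeHash([System.Text.Encoding]::UTF8.GetBytes($Key))
    [System.BitConverter]::ToUInt32($hash, 0) % $size
    [System.BitConverter]::ToUInt32($hash, 4) % $size
}

function Add-Key([string] $Key) {
    Get-BitPositions $Key | ForEach-Object { $bits[$_] = $true }
}

function Test-Key([string] $Key) {
    # $true only if every derived bit is set.
    -not (Get-BitPositions $Key | Where-Object { -not $bits[$_] })
}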

By the end of this exercise, we were able to increase the scale from 3k device check-ins to 70k+ device check-ins – an increase of more than 23x – and we did this without scaling out the cluster!

#3: Performance

Our goal for performance had a very specific target:  Culture change.

We wanted to ensure that our performance was continuously evaluated in production and we wanted to be able to catch performance regressions before releasing to production.

To do this, we first used Azure profiler and associated flame graphs to perform continuous profiling in production.  This process showed our engineers how to drive several key improvements, and subsequently, it became a powerful daily tool for the engineers to determine bottlenecks in code, inefficiencies, high CPU usage, etc.  Some of the improvements identified by the engineering team as a result of this continuous profiling include:

  • Blocking Calls:
    Some of the calls made from one service to another were incorrectly blocking instead of following async patterns.  We fixed this by removing the blocking calls and making them asynchronous, which reduced timeouts and thread pool exhaustion.
  • Locking:
    Another pattern we noticed using the profiler was lock contention between threads.  Using the profiler’s call stacks, we were able to pinpoint the offending code, fix the bugs, and remove the associated latencies.
  • High CPU:
    There were numerous instances where we were easily able to catch high-CPU situations using the profiler and quickly determine root causes and fixes.
  • Tail latency:
    While investigating certain latencies associated with devices checking in or our portal flows, we noticed that some of the search requests were being sent across to all the partitions of a service. In many cases, there is just one partition that holds this data and the search can be performed against that single partition instead of fanning out across all of them.  We successfully made optimizations to do a search directly against the partition that held the data – and the result was a drop in latency from 200 msec to less than 15 msec (see chart below).  The end result was improved response times in devices checking in and faster data retrievals in our ITPro portal.


Our next action was to start a benchmark service that consistently and constantly ran high traffic in our pre-production environments.  Our goal here was to catch performance regressions.  We also began running a consistent traffic load (equivalent to production loads) across all services in our pre-production environments.  We made a practice of treating a drop in our pre-production environment as a major blocker for production releases.

Together, both of these practices became the norm in the engineering organization, and we are proud of this positive culture change in meeting the performance goal.

#4: Engineering Agility

As called out in the first post in this series, Intune is composed of many independent and decoupled Service Fabric services. The development and deployment of these services, however, are genuinely monolithic in nature.  They deploy as a single unit, and all services are developed in a single large repo – essentially, a monolith.  This setup was an intentional decision when we started our modern service journey, because a large portion of the team was focusing on the re-architecture effort and our cloud engineering maturity was not yet fully realized.  For these reasons, we chose simplicity over agility.  As we dramatically grew our feature investments (both in terms of the number of features and the number of engineers working on them), we started experiencing agility issues.  The solution was decoupling the services in the monolith from development, deployment, and maintenance perspectives.  To do this, we set three primary goals for improving agility:

  • Building a service should complete within minutes (this was down from 7+ hrs)
  • Pull requests should complete in minutes (down from 1+ day)
  • Deployments to our pre-prod environments should occur several times per day (down from once or twice per week)

As indicated above, our agility was initially hurting us when it came to rapidly delivering features.  Pull requests (PRs) would sometimes take days to complete due to the aforementioned monolithic nature of the build environments – this meant that any change anywhere by anyone in Intune would impact everyone’s PR.  On any given day, the churn was so high that it was extremely hard to get stable, fast builds or PRs.  This, in turn, impacted our ability to deploy this massive build to our internal dogfood environments.  In the best case, we were able to deploy once or twice per week.  This, obviously, was not something we wanted to sustain.

We made an investment in decoupling the monolithic services and improving our agility.  Over a period of 2+ years, we made two major improvements:

  •  Move to individual Git repos:
    Services moved from a proprietary Source Depot monolith branch to their own individual Git repos. This decoupled development, PRs, unit and component tests, and builds.  The change resulted in builds completing in around 30 minutes – a huge difference from the previous 7-8 hours or more.
  • Carve out micro services from the monolith:
    Services were carved out of the Service Fabric application, packaged into their own application, and turned into their own independently deployable unit. We refer to such an application as a micro service.

As this investment progressed and evolved, we started seeing huge benefits. The following demonstrates some of them:

  • Build/PR Times:
    For micro services, we reduced the time a service typically takes to complete a build to within 30 minutes, down from the previous 7+ hours. Similarly, the monolith saw an improvement to 2-3 hours from the 7 hours. A similar improvement happened in PR times as well – down to a few minutes for micro services (from 1+ day).
  • Deployments to Pre-prod Dogfood Environments:
    With the monolith, successful deployments to pre-production dogfood environments would take us a minimum of 1 day and, in some extreme cases, up to a week. With the investments above, we are now able to complete several deployments per day across the monolith and micro services.  This is primarily because of faster deployment times (due to the parallel deployments of micro services) and the number of services that have been moved out of the monolith into their own micro services.

The chart below demonstrates one such example.  The black line shows that we went from single digits to 1000’s of deployments per month in production environments.  In pre-production dogfood environments, this was even higher – typically reaching 10’s of deployments per day across all the services in a single cluster.


 

Challenges:

Today, Intune is part monolith and part micro services.  Eventually, we expect to compose 40-50 micro services from the existing monolith.  There are challenges in managing micro services due to the way they are independently created and managed, and we are developing tooling to address some of these management issues.  For example, binary dependencies between micro services are a problem because of versioning conflicts.  To address this, we developed a dependency tool to identify conflicting or missing binary dependencies between micro services.  Automation is also important if a critical fix needs to be rolled out across all micro services to mitigate a common library issue; without proper tooling, it can be very hard and time consuming to propagate such a fix to all micro services.  Similarly, we are developing tooling to determine all the resources required by a micro service, as well as all the resource management aspects, such as key rotation, expiration, etc.

Learnings

There were 3 learnings from this experience that are applicable to any large-scale cloud service:

  1. It is critically important to set realistic and achievable goals for SLA and scale – and then be persistent and diligent in driving towards achieving these goals. The best outcomes happen when a set of engineers from across the org work together as a unit towards a common goal.  Once you have this in place, make incremental changes; the cumulative effect of all the small changes pays significant dividends over time.
  2. Continuous profiling is a critical element of cloud service performance. It helps in reducing resource consumption, tail latencies, and it indirectly benefits all runtime aspects of a service.
  3. Micro services help in improving agility. Proper tooling to handle patches, deployments, dependencies, and resource management are critical to deploy and operate micro services in a high-scale distributed cloud service.

 

Conclusion

The improvements that came about from this stage of our cloud journey have been incredibly encouraging, and we are proud of operating our services with high SLA and performance while also rapidly increasing the scale of our traffic and services.

The next stage of our evolution will be covered in Part-4 of this series:  A look at our efforts to make the Intune service even more reliable and efficient by ensuring that the rollout of new features produce minimal-to-no impact to existing feature usage by customers – all while continuing to improve our engineering agility.

The Next Release of PowerShell – PowerShell 7


Recently, the PowerShell Team shipped the Generally Available (GA) release of PowerShell Core 6.2. Since that release, we’ve already begun work on the next iteration!

We’re calling the next release PowerShell 7, the reasons for which will be explained in this blog post.

Why 7 and not 6.3?

PowerShell Core usage has grown significantly in the last two years. In particular, the bulk of our growth has come from Linux usage, an encouraging statistic given our investment in making PowerShell viable cross-platform.


However, we also can clearly see that our Windows usage has not been growing as significantly, surprising given that PowerShell was popularized on the Windows platform. We believe that this could be occurring because existing Windows PowerShell users have existing automation that is incompatible with PowerShell Core because of unsupported modules, assemblies, and APIs. These folks are unable to take advantage of PowerShell Core’s new features, increased performance, and bug fixes. To address this, we are renewing our efforts towards a full replacement of Windows PowerShell 5.1 with our next release.

This means that Windows PowerShell and PowerShell Core users will be able to use the same version of PowerShell to automate across Windows, Linux, and macOS, and PowerShell 7 users will have a very high level of compatibility with the Windows PowerShell modules they rely on today.

We’re also going to take the opportunity to simplify our references to PowerShell in documentation and product pages, dropping the “Core” in “PowerShell 7”. The PSEdition will still reflect Core, but this will only be a technical distinction in APIs and documentation where appropriate.
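
For reference, you can check which edition a given session is running from the built-in version table:

# 'Core' on PowerShell Core (and PowerShell 7), 'Desktop' on Windows PowerShell 5.1
$PSVersionTable.PSEdition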

Note that the major version does not imply that we will be making significant breaking changes. While we took the opportunity to make some breaking changes in 6.0, many of those were compromises to ensure our compatibility on non-Windows platforms. Prior to that, Windows PowerShell historically updated its major version based on new versions of Windows rather than Semantic Versioning.

.NET Core 3.0

PowerShell Core 6.1 brought compatibility with many built-in Windows PowerShell modules, and our estimation is that PowerShell 7 can attain compatibility with 90+% of the inbox Windows PowerShell modules by leveraging changes in .NET Core 3.0 that bring back many APIs required by modules built on .NET Framework, allowing them to work with the .NET Core runtime. For example, we expect Out-GridView to come back (for Windows only, though)!

A significant effort for PowerShell 7 is porting the PowerShell Core 6 code base to .NET Core 3.0 and also working with Windows partner teams to validate their modules against PowerShell 7.

Support Lifecycle Changes

Currently, PowerShell Core is under the Microsoft Modern Lifecycle Policy. This means that PowerShell Core 6 is fix-forward: we produce servicing releases for security fixes and critical bug fixes, and you must install the latest stable version within 6 months of a new minor version release.

In PowerShell 7, we will align more closely with the .NET Core support lifecycle, enabling PowerShell 7 to have both LTS (Long Term Servicing) and non-LTS releases.

We will still have monthly Preview releases to get feedback early.

When do I get PowerShell 7?

The first Preview release of PowerShell 7 will likely be in May. Be aware, however, that this depends on completing integration and validation of PowerShell with .NET Core 3.0.

Since PowerShell 7 is aligned with the .NET Core timeline, we expect the generally available (GA) release to be some time after the GA of .NET Core 3.0.

What about shipping in Windows?

We are planning on eventually shipping PowerShell 7 in Windows as a side-by-side feature with Windows PowerShell 5.1, but we still need to work out some of the details on how you will manage this inbox version of PowerShell 7.

And since the .NET Core timeline doesn’t align with the Windows timeline, we can’t say right now when it will show up in a future version of Windows 10 or Windows Server.

What other features will be in PowerShell 7?

We haven’t closed on our feature planning yet, but expect another blog post relatively soon with a roadmap of our current feature level plans for PowerShell 7.

Steve Lee
https://twitter.com/Steve_MSFT
Principal Engineering Manager
PowerShell Team

The post The Next Release of PowerShell – PowerShell 7 appeared first on PowerShell.


PowerShell Core Release Improvements


Overview

For PowerShell Core, we essentially had to build a new engineering system to build and release it. How we build it has evolved over time as we learn and as other teams have implemented features that make some tasks easier. We are finally at a point where we believe we can engineer a system that builds PowerShell Core for release with as little human interaction as possible.

Current state

Before the changes described here, we had one build per platform. After the binaries were built, they had to be tested and then packaged into the various packages for release. This was done in a private Azure DevOps Pipelines instance. In this state, it took a good deal of people’s time to do a PowerShell Core release. Before these changes, it would take 3-4 people about a week to release PowerShell Core, with the percentage of time people were focused on the release averaging around 50%.

Goals

  1. Remain compliant with Microsoft and external standards we are required to follow.
  2. Automate as much of the build, test, and release process as possible.
    • This should significantly reduce the amount of human toil needed in each release.
  3. Hopefully, provide some tools or practices others can follow.

What we have done so far

  1. We ported our CI tests to Azure DevOps Pipelines.
    • We have used this in a release, and we saw that it allowed us to run at least those tests in our private Azure DevOps Pipelines instance.
    • This saves us 2-4 man hours per release and a day or more of calendar time if all goes well.
  2. We have moved our release build definitions to YAML.
    • We have used this in a release and we see that this allows us to treat the release build as code and iterate more quickly.
    • This saves us 1-2 man hours per release, when we have done everything correctly.
  3. I have begun to merge the different platform builds into one combined build.
    • We have not yet used this in a release but we believe this should allow us to have a single button that gets us ready to test.
    • This has not been in use long enough to determine how much time it will save.
  4. We have begun to automate our release testing. Our release testing is very similar to our CI testing, just across more distributions and versions of Windows. We plan to be able to run this through Azure DevOps Pipelines as well.
    • This has not been in use long enough to determine how much time it will save.
  5. We have automated the generation of the draft change log, categorizing the entries based on labels the maintainers apply to the PRs. After generation, the maintainers still need to review the change descriptions to make sure they make sense in the change log. (A rough sketch of this kind of label-driven generation appears after this list.)
    • This saves us 2-4 man hours per release.
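
As a rough illustration of that last item (not the team’s actual release tooling), merged PRs and their labels can be pulled from the public GitHub API and grouped into draft change log sections:

# Hypothetical sketch: group recently merged PowerShell PRs by their first
# label to draft change log sections. Not the team's real tooling.
$uri = 'https://api.github.com/repos/PowerShell/PowerShell/pulls?state=closed&per_page=50'
Invoke-RestMethod -Uri $uri |
    Where-Object { $_.merged_at } |                  # keep only merged PRs
    ForEach-Object {
        $label = if ($_.labels) { $_.labels[0].name } else { 'Uncategorized' }
        [pscustomobject]@{ Label = $label; Title = $_.title; Number = $_.number }
    } |
    Group-Object Label |
    ForEach-Object {
        "### $($_.Name)"
        $_.Group | ForEach-Object { "- $($_.Title) (#$($_.Number))" }
    }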

Summary of improvements

After all these changes, we can now release with 2-3 people in 2 to 3 days, with an average of 25% time focusing on the release.

Details of the combined build

Azure DevOps Pipelines allows us to define a complex build pipeline. The build will be complex, but features like templates in Azure DevOps make it possible to break it into manageable pieces.

Although this design does not technically reduce the number of parts, one significant thing it does for us is put all of our artifacts in one place. Having the artifacts in one place reduces the input to the steps in the rest of the build, such as test and release.

I’m not going to discuss it much, but in order to coordinate this work we are keeping a diagram of the build. I’ll include it here. If you want me to post another blog on the details, please leave a comment.

[Diagram: the combined build pipeline]

What is left to do

  1. We still have to add the other various NuGet package build steps to the coordinated build.
  2. We need to automate functionality (CI tests) across a representative sample of supported platforms.
  3. It would be nice if we could enforce in GitHub the process that helps us automate the change log generation.
  4. We need to automate the release process including:
    • Automating package testing. For example, MSI, Zip, Deb, RPM, and Snap.
    • Automating the actual release to GitHub, mcr.microsoft.com, packages.microsoft.com and the Snap store.

Travis Plunk
Senior Software Engineer
PowerShell Team

The post PowerShell Core Release Improvements appeared first on PowerShell.

MSIX: Package Support Framework Part 3


___________________________________________________________________________________________________________________________

IMPORTANT ANNOUNCEMENT FOR OUR READERS!

AskPFEPlat is in the process of a transformation to the new Core Infrastructure and Security TechCommunity, and will be moving by the end of March 2019 to our new home at https://aka.ms/CISTechComm (hosted at https://techcommunity.microsoft.com). Please bear with us while we are still under construction!

We will continue bringing you the same great content, from the same great contributors, on our new platform. Until then, you can access our new content on either https://aka.ms/askpfeplat as you do today, or at our new site https://aka.ms/CISTechComm. Please feel free to update your bookmarks accordingly!

Why are we doing this? Simple really; we are looking to expand our team internally in order to provide you even more great content, as well as take on a more proactive role in the future with our readers (more to come on that later)! Since our team encompasses many more roles than Premier Field Engineers these days, we felt it was also time we reflected that initial expansion.

If you have never visited the TechCommunity site, it can be found at https://techcommunity.microsoft.com. On the TechCommunity site, you will find numerous technical communities across many topics, which include discussion areas, along with blog content.

NOTE: In addition to the AskPFEPlat-to-Core Infrastructure and Security transformation, Premier Field Engineers from all technology areas will be working together to expand the TechCommunity site even further, joining together in the technology agnostic Premier Field Engineering TechCommunity (along with Core Infrastructure and Security), which can be found at https://aka.ms/PFETechComm!

As always, thank you for continuing to read the Core Infrastructure and Security (AskPFEPlat) blog, and we look forward to providing you more great content well into the future!

__________________________________________________________________________________________________________________________

Hi all! Johannes Freundorfer, Ingmar Oosterhoff, and Matthias Herfurth back again for part 3 of our series!

Using the tools downloaded to our Virtual Machine in the previous blog (https://techcommunity.microsoft.com/t5/Core-Infrastructure-and-Security/MSIX-Package-Support-Framework-Part-2-Preparation/ba-p/393864), we’re now going to fix a “made to break” application provided by Microsoft.

This application can be downloaded from Github and needs to be compiled with Visual Studio.

The sources of this application can be found here: https://github.com/Microsoft/MSIX-PackageSupportFramework

Choosing “Clone or Download” will allow you to download the whole set of files as a ZIP-container.

 

  1. Expand the .zip to the folder structure created in the previous blog. Once expanded, the folder structure should look like this:

 

2. The next step is to review and compile the broken Sample application. To do this we need Microsoft Visual Studio. You could install that yourself or take the easy route and use the Quick create template in Hyper-V manager on your Windows 10 machine.

 

3. Select the Windows 10 dev environment VM, and a new VM with Visual Studio is up and running in minutes.

4. We can then copy our Resources folder over to this VM. Oh, and did you notice there is also a VM available with the MSIX Packaging Tool Environment?

5. Once Visual Studio is up and running, open the following file:

“C:\resources\Nuget\MSIX-PackageSupportFramework-master\samples\PSFSample\PSFSample.sln”

6. As this is a project from an external source, some warnings will pop up. To be able to proceed, you’ll need to accept those. If you missed some features during Visual Studio installation, those will be installed afterwards. You’ll know you’ve succeeded as soon as you see a window similar to the screenshot below: one solution called ‘PSFSample’ containing 4 projects.

 

7. Right click on the PSFSamplePackage to show the options available. Select Build.

 

8. “Open Folder in File Explorer” will finally open a window showing you the resulting files.

Preparation is now complete for fixing the application in our next posts…

 

References:

MSIX – The MSIX Packaging Tool – Using the First Package (Part 1)

https://techcommunity.microsoft.com/t5/Core-Infrastructure-and-Security/MSIX-The-MSIX-Packaging-Tool-Using-the-first-package/ba-p/363553

MSIX Package Support Framework Part 2 – Preparation

https://techcommunity.microsoft.com/t5/Core-Infrastructure-and-Security/MSIX-Package-Support-Framework-Part-2-Preparation/ba-p/393864

 

Changes to Ticket-Granting Ticket (TGT) Delegation Across Trusts in Windows Server (AskPFEPlat edition)


Hello Everyone! Allen Sudbring here, Premier Field Engineer at Microsoft. Today I’m putting a post out to get some critical information to everyone who supports Windows Server and Active Directory Domain Services.

If you haven’t seen the KB article that this post references I encourage you to check out its content, I promise it’s important!

KB4490425 – Updates to TGT delegation across incoming trusts in Windows Server

With the introduction of Windows Server 2012, a new feature was added to Active Directory Domain Services that enforced the forest boundary for Kerberos unconstrained delegation. This allowed an administrator of a trusted forest to configure whether TGTs can be delegated to a service in the trusting forest. Unfortunately, an unsafe default configuration exists within this feature when creating an inbound trust that could allow an attacker in the trusting forest to request the delegation of a TGT for an identity from the trusted forest.

So what does this all mean?

Let’s back up a little bit and do a brief explanation on Kerberos delegation.

There are three kinds of Kerberos delegation in Active Directory:

  • Unconstrained
    When a Domain Administrator configures a service’s account to be trusted for unconstrained delegation, that service has the ability to impersonate any user account to any other service. This is the most insecure delegation option, because a service could impersonate any user to any other service it likes. For a regular user account this is not so bad, but for a Domain Admin or an Enterprise Admin, a rogue service could request information from the domain or change user account or group permissions in the name of the privileged account. For this reason, unconstrained Kerberos delegation is a high security risk.
  • Constrained
    First introduced with Windows Server 2003, constrained delegation allows an administrator to limit the services to which an impersonated account can connect. Constrained delegation is difficult to configure and requires unique SPNs to be registered as well as Domain Admin rights to implement. Constrained delegation cannot cross domain or forest boundaries.
  • Resource-based Constrained
    First introduced with Windows Server 2012, resource-based constrained delegation improved on the constrained delegation introduced with Windows Server 2003. It eliminated the need for SPNs by switching to security descriptors. This removed the need for Domain Admin rights to implement and allowed server administrators of backend services to control which service principals can request Kerberos tickets for another user. Resource-based constrained delegation works across domain and forest boundaries. (A quick way to inspect each of these is sketched after this list.)
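
Assuming the ActiveDirectory RSAT module is installed, here is one hedged way to see which flavor of delegation is configured in a domain (the SQL01 computer name is a placeholder):

# Unconstrained delegation (high risk): the account can impersonate anyone.
Get-ADComputer -Filter 'TrustedForDelegation -eq $true' | Select-Object Name, DNSHostName
Get-ADUser -Filter 'TrustedForDelegation -eq $true' | Select-Object Name

# Constrained delegation: limited to the SPNs in msDS-AllowedToDelegateTo.
Get-ADObject -Filter 'msDS-AllowedToDelegateTo -like "*"' -Properties msDS-AllowedToDelegateTo |
    Select-Object Name, msDS-AllowedToDelegateTo

# Resource-based constrained delegation: configured on the resource itself.
Get-ADComputer -Identity 'SQL01' -Properties PrincipalsAllowedToDelegateToAccount |
    Select-Object Name, PrincipalsAllowedToDelegateToAccount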

For more information on Kerberos delegation, refer to this documentation:

Kerberos Constrained Delegation Overview

All currently supported versions of Windows Server that are utilized for Active Directory Domain controllers have this vulnerability:

  • Windows Server 2008
  • Windows Server 2008 R2
  • Windows Server 2012
  • Windows Server 2012 R2
  • Windows Server 2016
  • Windows Server 2019

 

Let’s say you are responsible for the Contoso forest and you have a partner who owns the Fabrikam forest whose resources your users use. How could an attacker in Fabrikam take advantage of this vulnerability?

First, they need to have the ability to configure a service they own to be trusted for unconstrained delegation. By default, this requires domain administrator privilege in the fabrikam.com domain.

Next, they need to get your user to authenticate to their rogue service in your partner’s Fabrikam forest.

Now they have your user’s TGT which they can use to authenticate to any service as that user.

Technical Overview of the Vulnerability

As a consequence of this vulnerability, an attacker who has control of a forest with an inbound trust to another forest can request a TGT for a user in the trusted forest by enabling unconstrained delegation on a service principal in the trusting forest. The attacker would need to convince the user to authenticate to the resource in the trusting forest, thereby allowing the attacker to request a delegated TGT.

To mitigate this vulnerability, a netdom command can be executed that will disable TGT delegation.

The EnableTGTDelegation flag becomes available on Windows Server 2008 and Windows Server 2008 R2 devices after installing the March 12, 2019 updates. On Windows Server 2012 and higher, the EnableTGTDelegation flag is in the operating system out of the box.

TGT delegation across an incoming trust can be disabled by setting the EnableTGTDelegation flag to No on the trust using netdom.

netdom.exe trust fabrikam.com /domain:contoso.com /EnableTGTDelegation:No
  • This flag should be set in the trusted domain (such as contoso.com) for each trusting domain (such as fabrikam.com). After the flag is set, the trusted domain will no longer allow TGTs to be delegated to the trusting domain.
  • The secure state is No.
  • Any application or service that relies on unconstrained delegation across forests will fail.

 

Starting with the March 2019 security updates, this ability was backported to Windows Server 2008 and 2008 R2. Below is the timeline that Microsoft has announced to address this vulnerability:

  • March 12, 2019
    Ability to disable TGT delegation added to Windows Server 2008 and 2008 R2.

    The following workaround guidance is recommended if the update has been installed; see the known issues section of KB4490425 for details.

  • May 14, 2019
    An update will be released that will change the default behavior of EnableTGTDelegation to provide a safe default configuration. If delegation is required across trusts, this flag should be set to Yes before the July 2019 updates are installed. After this update, any newly created trusts will have the EnableTGTDelegation flag set to No by default.
  • July 9, 2019
    An update will be released that will force the trust flag on existing trusts and disable TGT delegation by default. Any trust that has been configured to continue using delegation after May 14, 2019 will not be affected.

 

The July 2019 update cycle is the one that could cause issues in an existing environment. After that month’s updates are installed, any existing forest trusts will have TGT delegation disabled by default. This could cause applications and services that require unconstrained delegation across a trust to fail. Because this issue could affect customers, it is recommended that you start evaluating applications and accounts that might be affected by this change as soon as possible.

To help determine if any applications or accounts are using the unsafe delegation, use the following resources:

  • PowerShell
    • A script has been created that can scan forests that have incoming trusts that allow TGT delegation.
    • Refer to this support article for the PowerShell code:
      KB4490425 – Updates to TGT delegation across incoming trusts in Windows Server
    • Copy and paste the code from the support article into a file named Get-RiskyServiceAccountsByTrust.ps1
    • There are two option switches that the script can be executed with:
      • -Collect will output any principals that have unconstrained delegation.
        Get-RiskyServiceAccountsByTrust.ps1 -Collect
        
      • -Collect -ScanAll will output security principals that have unconstrained delegation and also search across trusts that do not allow TGT delegation.
        Get-RiskyServiceAccountsByTrust.ps1 -Collect -ScanAll
        

      Example of Output:

  • Event Viewer/Event Logs
    • In an Active Directory domain, when a Kerberos ticket is issued, the domain controller logs security events. These events contain information about the target domain and can be used to determine whether unconstrained delegation is being used across incoming trusts.
      • Check for events that contain a TargetDomainName value that matches the trusted domain name.
      • Check for events that contain a TicketOptions value that includes the ok_as_delegate flag (0x00040000). A hedged query sketch follows this list.
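
Here is a hedged sketch of that event-log check using Get-WinEvent, run on a domain controller; event ID 4769 (a Kerberos service ticket was requested) carries both fields, and the trusted domain name below is a placeholder:

# Illustrative only: scan Kerberos service ticket events (4769) for tickets
# issued across the trust with the ok_as_delegate bit (0x00040000) set.
$trustedDomain = 'CONTOSO.COM'   # replace with your trusted domain name
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4769 } -MaxEvents 5000 |
    ForEach-Object {
        $data = ([xml]$_.ToXml()).Event.EventData.Data
        [pscustomobject]@{
            Time    = $_.TimeCreated
            Domain  = ($data | Where-Object Name -eq 'TargetDomainName').'#text'
            Options = [convert]::ToUInt32(($data | Where-Object Name -eq 'TicketOptions').'#text', 16)
        }
    } |
    Where-Object { $_.Domain -eq $trustedDomain -and ($_.Options -band 0x00040000) }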

Next Steps

  • Update any Windows Server 2008 or 2008 R2 domain controllers with the March 2019 security updates as soon as possible. View known issues above before proceeding.
  • Determine the applications and accounts that could be affected now, and if there aren’t any, and a trust is in place, disable the delegation as soon as possible to be in a safe configuration.
    netdom.exe trust fabrikam.com /domain:contoso.com /EnableTGTDelegation:No
    
  • Applications that rely on unconstrained delegation should be configured to use resource-based constrained delegation. See Kerberos Constrained Delegation Overview for more information.
  • Once you have set the applications to resource-based constrained delegation, set the flag to No.
  • If it’s determined that applications or accounts do exist that require this delegation in the environment, then set the flag to Yes BEFORE the July 2019 updates are installed. This is not recommended as a long-term configuration and should be avoided.

Important Resources and Links

Acknowledgements

I would like to thank the following people for helping pull this post together and provide content:

  • Alan La Pietra – Microsoft
  • David Loder – Microsoft
  • Steve Syfuhs – Microsoft
  • Brandon Wilson – Microsoft
  • Michiko Short – Microsoft
  • Paul Miller – Microsoft

 

LDAP Reconnaissance – the foundation of Active Directory attacks


When an attacker manages to break into an on-premises domain environment, one of the first steps they normally take is to gather information and perform domain reconnaissance. Reconnaissance involves identifying the users, resources and computers in the domain and then building an understanding of how those resources are used to form your domain environment.  

 

While an attacker can gather data without credentials, research has revealed that most of the time, attackers make use of normal, non-privileged domain user rights to make their moves. 

 

Figure 1 - BloodHound generated graph used to find a Domain Admin (source: https://wald0.com/?p=68)

 

How do LDAP-based attacks succeed if security is in place?  

 

In most environments, every account in the domain has the permissions needed to perform reconnaissance using the LDAP protocol, and LDAP is deployed as a default part of domain controller services. With the default configuration in place, any domain user can retrieve domain configurations, such as where Exchange servers are installed, or get account-related details, such as Domain Admin group membership lists, as well as details about which accounts can delegate authentication, which users have a Kerberos principal name, and more. 
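
To see how little this requires, here is a hedged sketch of the kind of query any authenticated domain user can run with nothing but built-in PowerShell (no RSAT, no elevated rights):

# Illustrative only: enumerate Domain Admins membership over LDAP.
$searcher = [adsisearcher]'(&(objectCategory=group)(cn=Domain Admins))'
$searcher.FindOne().Properties['member']

# Find accounts trusted for unconstrained delegation; 524288 (0x80000) is
# the TRUSTED_FOR_DELEGATION bit tested with the LDAP bitwise-AND rule.
([adsisearcher]'(userAccountControl:1.2.840.113556.1.4.803:=524288)').FindAll() |
    ForEach-Object { $_.Properties['samaccountname'] }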

 

Aside from user accounts, most on-premises domain services use LDAP as a key element for their basic functionality, and group policies are sent to every domain computer over LDAP.   

 

Attackers are known to use LDAP queries to visually map the domain environment, using publicly available tools such as PowerView and BloodHound to implement the queries. These tools help gather all users, groups, computer accounts and account access control lists (ACLs) in the environment. Once the collected data is parsed, it is stored in a graph database and used to build a visual graph that displays the edges between the different accounts, helping the attackers determine and plan their lateral moves in the domain. 

 

Add standard user account risk to LDAP group policy exposure, and you can quickly start to see why LDAP is a potential attack gold mine. By exploiting your LDAP exposure and risk points, attackers find sensitive group memberships and vulnerable services, and map domain account relationships by exploiting any user permissions they can breach or find in your domain. 

 

A single point of failure on a standard user account can be the start of a large-scale breach.

 

There are also other types of attacks that can be initiated with an LDAP query. Attackers can initiate an internal phishing campaign by enumerating users in Finance or IT groups, harvest private phone numbers that allow them to send phishing links by text message, and find local administrators on endpoint computers by retrieving and parsing group policies.

 

With so many methods and possible attack surfaces, can your domain be protected from LDAP risks?

 

YES!

 

To protect your domain, your organization must be able to:

  1. Define and differentiate between legitimate and malicious activity
  2. Identify and investigate activity sources and intentions
  3. Correlate related activities from the same sources
  4. Discover and remediate compromised accounts

 

Unmitigated LDAP risks leave your entire organization exposed.

 

Backed by deep data learning modules, Azure Advanced Threat Protection now provides comprehensive LDAP alerts that learn and surface abnormal activities, identify and aid investigation of attack sources, correlate related events, and suggest remediation steps for compromised accounts.

 

Figure 2 - Azure Advanced Threat Protection Security principal reconnaissance (LDAP) alert

 

As our security research team continues to develop and refine our threat protection modules and alerts, we welcome your feedback about our work and the security threats and attacks you encounter. We’re excited to hear from you and learn how we can help. 

 

 

Get Started Today

 

If you are just starting your journey, begin trials of the Microsoft Threat Protection services today to experience the benefits of the most comprehensive, integrated, and secure threat protection solution for the modern workplace:

Detecting LDAP based Kerberoasting with Azure ATP


In a typical Kerberoasting attack, attackers use LDAP queries to generate a list of all user accounts that have a Kerberos Service Principal Name (SPN) set. Once successful at listing these accounts, attackers request Kerberos service tickets for each user account with an SPN and later perform an offline brute-force attack on the encrypted part of the Kerberos tickets. This helps attackers recover a password that belongs to a domain account. Domain account passwords enable attackers to freely move laterally in your domain.

 

Environments where the Kerberos Ticket Granting Service (TGS) ticket is encrypted with a weak cipher, and the cipher key is derived from a well-known password (not randomly generated), are prime targets for successful brute force attacks of this type.  

 

The following attack logic is often used to find an organization's weakest link and perform LDAP-based Kerberoasting attacks.

 

Figure 1 - Typical Kerberoasting attack flow

 

Typical LDAP based Kerberoasting attack flow and result: 

 

Step 1: Identify

 

In this attack phase, attackers use LDAP to query and locate all user accounts with a Service Principal Name (SPN). Any user account in the domain can run this LDAP query.

 

Figure 2 - LDAP query that looks for all user accounts with an SPN set
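
For illustration, the equivalent of the query in Figure 2 is a short PowerShell snippet that any authenticated user can run (a hedged sketch, not an Azure ATP tool):

# Illustrative only: list user accounts that have an SPN set.
$searcher = [adsisearcher]'(&(samAccountType=805306368)(servicePrincipalName=*))'
$searcher.FindAll() | ForEach-Object {
    [pscustomobject]@{
        Account = $_.Properties['samaccountname'][0]
        SPNs    = $_.Properties['serviceprincipalname'] -join '; '
    }
}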

Step 2: Enumerate

In this phase of the attack, a Kerberos TGS request is made for the SPN using a valid TGT.

 

Figure 3 - TGS request to ExampleService of user1 by user2

Figure 4 - TGS response with ticket to ExampleService of user1

 

Step 3: Brute force

 

In the brute force phase of the attack, attackers run commonly available password cracking tools against accounts with commonly used passwords, and easily succeed at obtaining the password.

 

In the following example, a commonly used password cracking tool, John the Ripper, performs a successful brute force using a rainbow table.  

 

Figure 5 - Cracked password using a rainbow table

Step 4: Attack  

 

In cases where the attempted brute force attack (shown previously) is successful, attackers use the newly obtained clear-text password to log in to remote machines or access cloud resources and files.

 

Figure 6 - Interactive clear-text logon

How can you detect and prevent Kerberoast attacks from succeeding? 

Azure Advanced Threat Protection (Azure ATP) has risen to the Kerberoasting challenge and developed new methods to detect when malicious actors are attempting to perform LDAP-based reconnaissance on your domain. While this type of attack is difficult to detect, and LDAP’s extensive query language presented additional challenges, our security research focused on differentiating legitimate workflows from malicious behavior and surfacing all related activities and entities.

Our newest security alert involves smart behavioral detection backed by extensive machine learning, designed to raise an alert when any type of abnormal enumeration (including SPN enumeration) or queries against sensitive security groups are detected.  

 

Starting from v2.72, Azure ATP issues a Security principal reconnaissance (LDAP) alert when the first stage of a Kerberoasting attack attempt is detected on the domains we monitor.  

 

Each alert includes vital information for use in your investigation and remediation:

 

1. Identification of malicious activity

2. Attempted enumeration details and specifics

3. Historical comparisons and activity correlation

4. Suggested remediation steps

 


The following workflow explains how to use Azure ATP alerts to detect and remediate Kerberoasting attempts on your domain.

 

Step 1: Review the alert to identify the actors and entities involved.

 

Figure 7 - Azure ATP alert on suspicious enumerations

 

Step 2: Filter activities to review resource access on the entity involved

 

Figure 8 - Filter for resource access activities on Client1's profile

 

Step 3: Use the filter results to investigate the resource access activities

 

Figure 9 - Investigate the resource access activity (generated by Kerberos Ticket Granting Service) for ExampleService/User1

Step 4: Filter Interactive logon and Credential validation for the accessed entity

 

Figure 10 - Filter Interactive logon and Credential validation on User1’s profile

Step 5: Review logon and access attempts

 

Figure 11 - User1's clear-text password was used to log on interactively on Client2

Step 6: Remediate possible risks

  1. Force a password reset on the compromised account
  2. Require long and complex passwords for users with service principal accounts: https://docs.microsoft.com/en-us/windows/security/threat-protection/security-policy-settings/minimum-password-length
  3. Replace the user account with a Group Managed Service Account (gMSA) (a hedged sketch of steps 1 and 3 follows this list): https://docs.microsoft.com/en-us/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview
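
Assuming the ActiveDirectory module is available, steps 1 and 3 might look like the following (account and host names are hypothetical):

# 1. Force a password reset on the compromised account.
Set-ADAccountPassword -Identity 'user1' -Reset -NewPassword (Read-Host -AsSecureString 'New password')
Set-ADUser -Identity 'user1' -ChangePasswordAtLogon $true

# 3. Replace the service account with a gMSA (requires a KDS root key).
New-ADServiceAccount -Name 'svcExample' -DNSHostName 'svcexample.contoso.com' -PrincipalsAllowedToRetrieveManagedPassword 'AppServers'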

 

Kerberoasting remains a popular attack method and heavily discussed security issue, but the effects of a successful Kerberoasting attack are real. Make sure your security team is aware of common Kerberoasting risks and strategies, along with the tools and alerts Azure ATP offers to help protect your domain.

 

As always, we welcome your feedback about our work and are interested in learning more about the security threats and risks you encounter. For more information about features and threat protection, or to learn how we can help, contact us.

 

Get Started Today

 

If you are just starting your journey, begin trials of the Microsoft Threat Protection services today to experience the benefits of the most comprehensive, integrated, and secure threat protection solution for the modern workplace:

 

 

 
