
Windows Server 2016/2019 Cluster Resource / Resource Types


Over the years, we have been asked about what some of the Failover Cluster resources/resource types are and what they do. There are several resources that have been asked about on multiple occasions and we haven’t really had a good definition to point you to. Well, not anymore.

What I want to do with this blog is define what they are, what they do, and when they were added (or removed). I am only going to cover the in-box resource types that come with Failover Clustering. But first, I wanted to explain what a cluster “resource” and “resource types” are.

Cluster resources are physical or logical entities, such as a file share, disk, or IP Address managed by the Cluster Service. The operating system does not distinguish between cluster and local resources. Resources may provide a service to clients or be an integral part of the cluster. Examples of resources would be physical hardware devices such as disk drives, or logical items such as IP addresses, network names, applications, and services. They are the basic and smallest configurable unit managed by the Cluster Service. A resource can only run on a single node in a cluster at a time.

Cluster resource types are dynamic library plug-ins. These Resource DLLs are responsible for carrying out most operations on cluster resources. A resource DLL is characterized as follows:

  • It contains the resource-specific code necessary to provide high availability for instances of one or more resource types.
  • It exposes this code through a standard interface consisting of a set of entry point functions.
  • It is registered with the Cluster service to associate one or more resource type names with the name of the DLL.
  • It is always loaded into a Resource Monitor’s process when in use.

When the Cluster service needs to perform an operation on a resource, it sends the request to the Resource Monitor assigned to the resource. If the Resource Monitor does not have a DLL in its process that can handle that type of resource, it uses the registration information to load the DLL associated with the resource type. It then passes the Cluster service’s request to one of the DLL’s entry point functions. The resource DLL handles the details of the operation so as to meet the specific needs of the resource.

You can define your own resource types to provide customized support for cluster-unaware applications, enhanced support for cluster-aware applications, or specialized support for new kinds of devices. For more information, see Creating Resource Types.

All resource types that are available in a Failover Cluster can be seen by right-clicking the name of the cluster, choosing Properties, and selecting the Resource Types tab.

You can also get a list by running the PowerShell command Get-ClusterResourceType. Keep in mind that not all resource types will show up or be available on every cluster. For example, if the Hyper-V role is not installed, the virtual machine resource types will not be available.
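For example, this one-liner lists the registered resource types and their display names:

# list the resource types registered in this cluster
Get-ClusterResourceType | Sort-Object Name | Select-Object Name, DisplayName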

So enough about this, let’s get to the resource types, when they were available and, for some, when they were last seen.

Since there are multiple versions of Windows Clustering, this blog will only focus on the two latest versions (Windows Server 2016 and 2019).

 

Windows Server 2016 / 2019

Cloud Witness (clusres.dll): Cloud Witness is a quorum witness that leverages Microsoft Azure as the arbitration point. It uses Azure Blob Storage to read/write a blob file, which is then used as an arbitration point for split-brain resolution.
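As a quick sketch, a Cloud Witness can be configured with Set-ClusterQuorum; the storage account name and access key below are placeholders:

# configure a Cloud Witness; account name and access key are placeholders
Set-ClusterQuorum -CloudWitness -AccountName 'mystorageaccount' -AccessKey '<storage-account-access-key>'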

DFS Replicated Folder (dfsrclus.dll): Manages a Distributed File System (DFS) replicated folder. When creating a DFS, this resource type is configured to ensure proper replication occurs. For more information regarding this, please refer to the 3-part blog series on the topic.

DHCP Service (clnetres.dll): The DHCP Service resource type supports the Dynamic Host Configuration Protocol (DHCP) Service as a cluster resource. There can be only one instance of a resource of this type in the cluster (that is, a cluster can support only one DHCP Service). Dynamic Host Configuration Protocol (DHCP) is a client/server protocol that automatically provides an Internet Protocol (IP) host with its IP address and other related configuration information such as the subnet mask and default gateway. RFCs 2131 and 2132 define DHCP as an Internet Engineering Task Force (IETF) standard based on Bootstrap Protocol (BOOTP), a protocol with which DHCP shares many implementation details. DHCP allows hosts to obtain required TCP/IP configuration information from a DHCP server.

Disjoint IPv4 Address (clusres.dll): An IPv4 resource type that can be used when setting up a site-to-site VPN gateway. It can only be configured by PowerShell, not by Failover Cluster Manager, the GUI tool on Windows Server. We added two IP addresses of this resource type, one for the internal network and one for the external network.

  • The internal address is plumbed down for the cluster network that is identified by Routing Domain ID and VLAN number. Remember, we mapped them to the internal network adapters on the Hyper-V hosts earlier. It should be noted that this address is the default gateway address for all machines on the internal network that need to connect to Azure.
  • The external address is plumbed down for the cluster network that is identified by the network adapter name. Remember, we renamed the external network adapter to “Internet” on both virtual machines.

Disjoint IPv6 Address (clusres.dll): An IPv6 resource type that can be used when setting up a site-to-site VPN gateway. It can only be configured by PowerShell, not by Failover Cluster Manager, the GUI tool on Windows Server. We added two IP addresses of this resource type, one for the internal network and one for the external network.

  • The internal address is plumbed down for the cluster network that is identified by Routing Domain ID and VLAN number. Remember, we mapped them to the internal network adapters on the Hyper-V hosts earlier. It should be noted that this address is the default gateway address for all machines on the internal network that need to connect to Azure.
  • The external address is plumbed down for the cluster network that is identified by the network adapter name. Remember, we renamed the external network adapter to “Internet” on both virtual machines.
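Since these resource types can only be created through PowerShell, a minimal sketch of adding one looks like this; the resource name and group name are placeholders, and additional resource-specific parameters would still need to be set:

# add a Disjoint IPv6 Address resource to an existing role/group via PowerShell
Add-ClusterResource -Name 'InternalIPv6' -ResourceType 'Disjoint IPv6 Address' -Group 'VPNGateway'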

Ras Cluster Resource (rasclusterres.dll): This resource object specifies where the site-to-site VPN configuration is stored. The file share can be anywhere the two virtual machines have read/write access to. It can only be configured by PowerShell, not by Failover Cluster Manager, the GUI tool on Windows Server. This resource type is only available after installing the VPN roles in Windows Server.

Distributed File System (clusres2.dll): Manages a Distributed File System (DFS) as a cluster resource. When creating a DFS, this resource type is configured to ensure proper replication occurs. For more information regarding this, please refer to the 3-part blog series on the topic.

Distributed Transaction Coordinator (mtxclu.dll): The Distributed Transaction Coordinator (DTC) resource type supports the Microsoft Distributed Transaction Coordinator (MSDTC). MSDTC is a Windows service providing transaction infrastructure for distributed systems, such as SQL Server. In this case, a transaction means a general way of structuring the interactions between autonomous agents in a distributed system.

File Server (clusres2.dll): Manages the shares that are created as highly available. A file share is a location on the network where clients connect to access data, including documents, programs, images, etc.

File Share Witness (clusres.dll): A File Share Witness is a witness (quorum) resource and is simply a file share created on a completely separate server from the cluster for tie-breaker scenarios when quorum needs to be established. A File Share Witness does not store cluster configuration data like a disk. It does, however, contain information about which version of the cluster configuration database is most recent.
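A file share witness can likewise be configured with Set-ClusterQuorum; the UNC path below is a placeholder:

# configure a file share witness; the share path is a placeholder
Set-ClusterQuorum -FileShareWitness '\\fileserver\witness'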

Generic Application (clusres2.dll): The Generic Application resource type manages cluster-unaware applications as cluster resources, as well as cluster-aware applications that are not associated with custom resource types. The Generic Application resource DLL provides only very basic application control. For example, it checks for application failure by determining whether the application’s process still exists and takes the application offline by terminating the process.

Generic Script (clusres2.dll): The Generic Script resource type works in conjunction with a script that you must provide to manage an application or service as a highly available cluster resource. In effect, the Generic Script resource type allows you to script your own resource DLL. For more information on how to use the Generic Script resource type, see Using the Generic Script Resource Type.

Generic Service (clusres2.dll): The Generic Service resource type manages services as cluster resources. Similar to the Generic Application resource type, the Generic Service resource type provides only the most basic functionality. For example, the failure of a Generic Service resource is determined by a query of the Service Control Manager (SCM). If the service is running, it is presumed to be online. To provide greater functionality, you can define a custom resource type (for information, see Creating Resource Types).

A generic service resource type is usually used to manage a stateless service as a cluster resource, which can be failed over. However, generic services don’t provide much state information other than their online state, so if they have an issue that doesn’t cause the resource to go offline, it is more difficult to detect a service failure.

Generic services should only be used when all of the following conditions are true; otherwise, you should create a resource DLL.

  • The resource is not a device. The generic resource types are not designed to manage hardware.
  • The resource is stateless.
  • The resource is not dependent on other resources.
  • The resource does not have unique attributes that should be managed with private properties.
  • The resource does not have special functionality that should be exposed through control codes.
  • The resource can be started and stopped easily without using special procedures.

Health Service (healthres.dll): The Health Service constantly monitors your Storage Spaces Direct cluster to detect problems and generate “faults”. Through either Windows Admin Center or PowerShell, it displays any current faults, allowing you to easily verify the health of your deployment without looking at every entity or feature in turn. Faults are designed to be precise, easy to understand, and actionable.

Each fault contains five important fields:

  • Severity
  • Description of the problem
  • Recommended next step(s) to address the problem
  • Identifying information for the faulting entity
  • Its physical location (if applicable)
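The current faults can be retrieved from PowerShell with the pattern shown in the Health Service documentation:

# show current Health Service faults on a Storage Spaces Direct cluster
Get-StorageSubSystem Cluster* | Debug-StorageSubSystem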

IP Address (clusres.dll): The IP Address resource type is used to manage Internet Protocol (IP) network addresses. When an IP Address resource is included in a group with a Network Name resource, the group can be accessed by network clients as a failover cluster instance (formerly known as a virtual server).

IPv6 Address (clusres.dll): The IPv6 Address resource type is used to manage Internet Protocol version 6 (IPv6) network addresses. When an IPv6 Address resource is included in a group with a Network Name resource, the group can be accessed by network clients as a failover cluster instance (formerly known as a virtual server).

IPv6 Tunnel Address (clusres2.dll): The IPv6 Tunnel Address resource type is used to manage Internet Protocol version 6 (IPv6) network tunnel addresses. When an IPv6 Tunnel Address resource is included in a group with a Network Name resource, the group can be accessed by network clients as a failover cluster instance (formerly known as a virtual server).

iSCSI Target Server (wtclusres.dll): Creates a highly available iSCSI Target server for machines to connect to for drives.

Microsoft iSNS (isnsclusres.dll): Manages an Internet Storage Name Service (iSNS) server. iSNS provides discovery services for Internet Small Computer System Interface (iSCSI) storage area networks. iSNS processes registration requests, deregistration requests, and queries from iSNS clients. We would recommend not using this resource type moving forward as it is being removed from the product.

MSMQ (mqclus.dll): Message Queuing (MSMQ) technology enables applications running at different times to communicate across heterogeneous networks and systems that may be temporarily offline. Applications send messages to queues and read messages from queues.

MSMQTriggers (mqtgclus.dll): Message Queuing triggers allow you to associate the arrival of incoming messages at a destination queue with the functionality of one or more COM components or stand-alone executable programs. These triggers can be used to define business rules that can be invoked when a message arrives at the queue without doing any additional programming. Application developers no longer must write any infrastructure code to provide this kind of message-handling functionality.

Network File System (nfsres.dll): The NFS cluster resource has a dependency on one Network Name resource and can also depend on one or more disk resources in a resource group. For a given Network Name resource, there can be only one NFS resource in a resource group. The dependent disk resource hosts one or more NFS shared paths. The shares hosted on an NFS resource are scoped to the dependent Network Name resource. Shares scoped to one network name are not visible to clients that mount using other network names or node names residing on the same cluster.

Network Name (clusres.dll): The Network Name resource type is used to provide an alternate computer name for an entity that exists on a network. When included in a group with an IP Address resource, a Network Name resource provides an identity to the role, allowing the role to be accessed by network clients as a Failover Cluster instance.

Distributed Network Name (clusres.dll): A Distributed Network Name is a name in the cluster that does not use a clustered IP address. It is a name that is published in DNS using the IP addresses of all the nodes in the cluster. Client connectivity to this type of name relies on DNS round robin. In Azure, this type of name can be used in lieu of an Internal Load Balancer (ILB) address.

Physical Disk (clusres.dll): The Physical Disk resource type manages a disk on a shared bus connected to two or more cluster nodes. Some groups may contain one or more Physical Disk resources as dependencies for other resources in the group. On a Storage Spaces Direct cluster, the disks are local to each of the nodes.

Hyper-V Network Virtualization Provider Address (provideraddressresource.dll): The IP address assigned by the hosting provider or the datacenter administrators based on their physical network infrastructure. The PA appears in the packets on the network that are exchanged with the server running Hyper-V that is hosting network virtualized virtual machine(s). The PA is visible on the physical network, but not to the virtual machines.

Scale Out File Server (clusres.dll): A Scale Out File Server (SOFS) is a share that can be accessed by any of the nodes. It uses the Distributed Network Name.

Storage Pool (clusres.dll): Manages a storage pool resource.  It allows for the creation and deletion of storage spaces virtual disks.

Storage QoS Policy Manager (clusres.dll): A resource type for the Policy Manager that collects the performance of storage resources allocated to the individual highly available virtual machines. It monitors the activity to help ensure storage is used fairly, within the I/O performance limits established through any policies that may be configured.

Storage Replica (wvrres.dll): Storage Replica is Windows Server technology that enables replication of volumes between servers or clusters for disaster recovery. This resource type enables you to create stretch failover clusters that span two sites, with all nodes staying in sync. A Stretch Cluster allows configuration of computers and storage in a single cluster, where some nodes share one set of asymmetric storage and some nodes share another, then synchronously or asynchronously replicate with site awareness. By stretching clusters, workloads can be run in multiple datacenters for quicker data access by local proximity users and applications, as well as better load distribution and use of compute resources.

Task Scheduler (clusres.dll): Task Scheduler is a resource that is tied to tasks you wish to run against the Cluster. Clustered tasks are not created or shown in Failover Cluster Manager. To create or view a Clustered Scheduled Task, you would need to use PowerShell.
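For example, a clustered task can be registered and listed with the ScheduledTasks cmdlets; the task name, action, and trigger below are illustrative placeholders:

# register a cluster-wide scheduled task; name, action, and trigger are examples
$action  = New-ScheduledTaskAction -Execute 'C:\Windows\System32\cmd.exe' -Argument '/c echo hello'
$trigger = New-ScheduledTaskTrigger -Daily -At 3am
Register-ClusteredScheduledTask -TaskName 'ExampleClusterTask' -TaskType ClusterWide -Action $action -Trigger $trigger

# list the clustered scheduled tasks
Get-ClusteredScheduledTask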

Virtual Machine (vmclusres.dll): The Virtual Machine resource type is used to control the state of a virtual machine (VM). The following table shows the mapping between the state of the VM (indicated by the EnabledState property of the Msvm_ComputerSystem instance representing the VM) and the state of the Virtual Machine resource (indicated by the State property of the MSCluster_Resource class or the return value of the GetClusterResourceState function).

 

VM State               Virtual Machine resource state
Enabled (2)            Online (2)
Disabled (3)           Offline (3)
Paused (32768)         Online (2)
Suspended (32769)      Offline (3)
Starting (32770)       Online Pending (129)
Saving (32773)         Offline Pending (130)
Stopping (32774)       Offline Pending (130)
Pausing (32776)        Online Pending (129)
Resuming (32777)       Online Pending (129)
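The resource-side state can be inspected from PowerShell, for example:

# list Virtual Machine resources and their current cluster state
Get-ClusterResource | Where-Object { $_.ResourceType -like 'Virtual Machine' } |
    Select-Object Name, OwnerGroup, State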

Virtual Machine Cluster WMI (vmclusres.dll): The Virtual Machine Cluster WMI resource type is used when virtual machine grouping (also known as virtual machine sets) has been configured. By grouping virtual machines together, managing the group is much easier than managing each virtual machine individually. VM groups enable checkpoints, backup, and replication of VMs that form a guest cluster and that use a shared VHDX.

Virtual Machine Configuration (vmclusres.dll): The Virtual Machine Configuration resource type is used to control the state of a virtual machine configuration.

Virtual Machine Replication Broker (vmclusres.dll): The Replication Broker is a prerequisite if you are replicating clusters using Hyper-V Replica. It acts as the point of contact for any replication requests and can query the associated cluster database to decide which node is the correct one to redirect VM-specific events, such as Live Migration requests, to. The broker also handles authentication requests on behalf of the VMs. A node can be added to or removed from a cluster at any point without the need to reconfigure the replication, as the communication between the primary and recovery clusters is directed to the respective brokers.

Virtual Machine Replication Coordinator (vmclusres.dll): The Coordinator comes into the picture when the concept of a “collection” is used in Hyper-V Replica. This was introduced in Windows Server 2016 and is a prerequisite for some of the newer features, for example, shared virtual hard disks. When VMs are replicated as part of a collection, the replication broker coordinates the actions and events that affect the VM group, for example, taking an app-consistent point-in-time snapshot, applying the replication settings, or modifying the replication interval, and propagates the change across all the VMs in the collection.

WINS Service (clnetres.dll): The WINS Service resource type supports the Windows Internet Name Service (WINS) as a cluster resource. There can be only one instance of a resource of this type in the cluster; in other words, a cluster can support only one WINS Service. Windows Internet Name Service (WINS) is a legacy computer name registration and resolution service that maps computer NetBIOS names to IP addresses.

 

Windows Server 2016 only

Cross Cluster Dependency Orchestrator (clusres.dll): This is a resource type that you can ignore; it does not do anything. It was intended for a new feature that never came to fruition, but the resource type was not removed. It has been removed in Windows Server 2019.

 

Windows Server 2019 only

SDDC Management (sddcres.dll): SDDC Management is installed when the cluster is enabled for Storage Spaces Direct. It is the management API that Windows Admin Center uses to connect to and manage your Storage Spaces Direct deployment. It is an in-box resource type in Windows Server 2019; for Windows Server 2016, it is a download and manual addition. For more information, please refer to the Manage Hyper-Converged Infrastructure with Windows Admin Center document.

Scaleout Worker (scaleout.dll): This is used for Cluster Sets. In a Cluster Set deployment, the CS-Master interacts with a new cluster resource on the member clusters called the “Cluster Set Worker” (CS-Worker). The CS-Worker acts as the only liaison on the cluster to orchestrate the local cluster interactions as requested by the CS-Master. Examples of such interactions include VM placement and cluster-local resource inventorying. There is only one CS-Worker instance for each of the member clusters in a Cluster Set.

Scaleout Master (scaleout.dll): This is also used for Cluster Sets. In a Cluster Set, the communication between the member clusters is loosely coupled and is coordinated by a new cluster resource called the “Cluster Set Master” (CS-Master). Like any other cluster resource, the CS-Master is highly available and resilient to individual member cluster failures and/or management cluster node failures. Through a new Cluster Set WMI provider, the CS-Master provides the management endpoint for all Cluster Set manageability interactions.

Infrastructure File Server (clusres.dll): In hyper-converged configurations, an Infrastructure SOFS allows an SMB client (Hyper-V host) to communicate with guaranteed Continuous Availability (CA) to the Infrastructure SOFS SMB server. This hyper-converged SMB loopback CA is achieved via VMs accessing their virtual disk (VHDx) files where the owning VM identity is forwarded between the client and server. This identity forwarding allows ACL-ing VHDx files just as in standard hyper-converged cluster configurations as before. There can be at most one Infrastructure SOFS cluster role on a Failover Cluster. Each CSV volume created in the failover cluster automatically triggers the creation of an SMB share with an auto-generated name based on the CSV volume name. An administrator cannot directly create or modify SMB shares under an Infrastructure SOFS role, other than via CSV volume create/modify operations. This role is commonly used with Cluster Sets.

 

Thanks
John Marlin
Senior Program Manager
High Availability and Storage

Follow me on Twitter @JohnMarlin_MSFT


Parsing Text with PowerShell (1/3)


This is the first post in a three part series.

  • Part 1:
    • Useful methods on the String class
    • Introduction to Regular Expressions
    • The Select-String cmdlet
  • Part 2:
    • The -split operator
    • The -match operator
    • The switch statement
    • The Regex class
  • Part 3:
    • A real world, complete and slightly bigger, example of a switch-based parser

A task that appears regularly in my workflow is text parsing. It may be about getting a token from a single line of text or about turning the text output of native tools into structured objects so I can leverage the power of PowerShell.

I always strive to create structure as early as I can in the pipeline, so that later on I can reason about the content as properties on objects instead of as text at some offset in a string. This also helps with sorting, since the properties can have their correct type, so that numbers, dates etc. are sorted as such and not as text.

There are a number of options available to a PowerShell user, and I’m giving an overview here of the most common ones.

This is not a text about how to create a high performance parser for a language with a structured EBNF grammar. There are better tools available for that, for example ANTLR.

.Net methods on the string class

Any treatment of string parsing in PowerShell would be incomplete if it didn’t mention the methods on the string class.
There are a few methods that I’m using more often than others when parsing strings:

Name Description
Substring(int startIndex) Retrieves a substring from this instance. The substring starts at a specified character position and continues to the end of the string.
Substring(int startIndex, int length) Retrieves a substring from this instance. The substring starts at a specified character position and has a specified length.
IndexOf(string value) Reports the zero-based index of the first occurrence of the specified string in this instance.
IndexOf(string value, int startIndex) Reports the zero-based index of the first occurrence of the specified string in this instance. The search starts at a specified character position.
LastIndexOf(string value) Reports the zero-based index of the last occurrence of the specified string in this instance. Often used together with Substring.
LastIndexOf(string value, int startIndex) Reports the zero-based index position of the last occurrence of a specified string within this instance. The search starts at a specified character position and proceeds backward toward the beginning of the string.

This is a minor subset of the available functions. It may be well worth your time to read up on the string class since it is so fundamental in PowerShell.
Docs are found here.

As an example, this can be useful when we have very large input data of comma-separated input with 15 columns and we are only interested in the third column from the end. If we were to use the -split ',' operator, we would create 15 new strings and an array for each line. On the other hand, using LastIndexOf on the input string a few times and then SubString to get the value of interest is faster and results in just one new string.

function parseThirdFromEnd([string]$line){
    $i = $line.LastIndexOf(",")             # get the last separator
    $i = $line.LastIndexOf(",", $i - 1)     # get the second to last separator, also the end of the column we are interested in
    $j = $line.LastIndexOf(",", $i - 1)     # get the separator before the column we want
    $j++                                    # move forward past the separator
    $line.SubString($j,$i-$j)               # get the text of the column we are looking for
}
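Calling it on a small sample shows the behavior:

parseThirdFromEnd "a,b,c,d,e,f"   # returns 'd', the third column from the end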

In this sample, I ignore that IndexOf and LastIndexOf return -1 if they cannot find the text to search for. From experience, I also know that it is easy to mess up the index arithmetic.
So while using these methods can improve performance, it is also more error prone and a lot more to type. I would only resort to this when I know the input data is very large and performance is an issue. So this is not a recommendation, or a starting point, but something to resort to.

On rare occasions, I write the whole parser in C#. An example of this is in a module wrapping the Perforce version control system, where the command line tool can output python dictionaries. It is a binary format, and the use case was complicated enough that I was more comfortable with a compiler checked implementation language.

Regular Expressions

Almost all of the parsing options in PowerShell make use of regular expressions, so I will start with a short intro of some regular expression concepts that are used later in these posts.

Regular expressions are very useful to know when writing simple parsers since they allow us to express patterns of interest and to capture text that matches those patterns.

It is a very rich language, but you can get quite a long way by learning a few key parts. I’ve found regular-expressions.info to be a good online resource for more information. It is not written directly for the .NET regex implementation, but most of the information is valid across the different implementations.

Regex Description
* Zero or more of the preceding character. a* matches the empty string, a, aa, etc, but not b.
+ One or more of the preceding character. a+ matches a, aa, etc, but not the empty string or b.
. Matches any character
[ax1] Any of a,x,1
[a-d] Any of a,b,c,d
\w The \w meta character is used to find a word character. A word character is a character from a-z, A-Z, 0-9, including the _ (underscore) character. It also matches Unicode word characters, such as accented letters.
\W The inversion of \w. Matches any non-word character
\s The \s meta character is used to find white space
\S The inversion of \s. Matches any non-whitespace character
\d Matches digits
\D The inversion of \d. Matches non-digits
\b Matches a word boundary, that is, the position between a word and a space.
\B The inversion of \b. er\B matches the er in verb but not the er in never.
^ The beginning of a line
$ The end of a line
(<expr>) Capture groups

Combining these, we can create a pattern like below to match a text like:

Text Pattern
" 42,Answer" ^s+d+,.+

The above pattern can be written like this using the x (ignore pattern whitespace) modifier.

Starting the regex with (?x) ignores whitespace in the pattern (it has to be specified explicitly, with \s) and also enables the comment character #.

(?x)  # this regex ignores whitespace in the pattern. Makes it possible to document a regex with comments.
^     # the start of the line
\s+   # one or more whitespace characters
\d+   # one or more digits
,     # a comma
.+    # one or more characters of any kind
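We can quickly verify that the pattern matches the sample text by calling the .NET Regex class directly (the Regex class is covered in more detail in part 2):

[regex]::IsMatch(" 42,Answer", '^\s+\d+,.+')   # True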

By using capture groups, we make it possible to refer back to specific parts of a matched expression.

Text Pattern
" 42,Answer" ^s+(d+),(.+)
(?x)  # this regex ignores whitespace in the pattern. Makes it possible to document a regex with comments.
^     # the start of the line
\s+   # one or more whitespace characters
(\d+) # capture one or more digits in the first group (index 1)
,     # a comma
(.+)  # capture one or more characters of any kind in the second group (index 2)

Naming regular expression groups

There is a construct called named capturing groups, (?<group_name>pattern), that will create a capture group with a designated name.

The regex above can be rewritten like this, which allows us to refer to the capture groups by name instead of by index.

^\s+(?<num>\d+),(?<text>.+)

Different languages have implementation specific solutions to accessing the values of the captured groups. We will see later on in this series how it is done in PowerShell.
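As a brief preview, here is how the named groups can be retrieved in PowerShell using the .NET Regex class directly:

$m = [regex]::Match(" 42,Answer", '^\s+(?<num>\d+),(?<text>.+)')
$m.Groups['num'].Value    # 42
$m.Groups['text'].Value   # Answer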

The Select-String cmdlet

The Select-String command is a work horse, and is very powerful when you understand the output it produces.
I use it mainly when searching for text in files, but occasionally also when looking for something in command output and similar.

The key to being efficient with Select-String is to know how to get to the matched patterns in the output. In its internals, it uses the same regex class as the -match and -split operator, but instead of populating a global variable with the resulting groups, as -match does, it writes an object to the pipeline, with a Matches property that contains the results of the match.

Set-Content twitterData.txt -value @"
Lee, Steve-@Steve_MSFT,2992
Lee Holmes-13000 @Lee_Holmes
Staffan Gustafsson-463 @StaffanGson
Tribbiani, Joey-@Matt_LeBlanc,463400
"@

# extracting captured groups
Get-ChildItem twitterData.txt |
    Select-String -Pattern "^(\w+) ([^-]+)-(\d+) (@\w+)" |
    Foreach-Object {
        $first, $last, $followers, $handle = $_.Matches[0].Groups[1..4].Value   # this is a common way of getting the groups of a call to select-string
        [PSCustomObject] @{
            FirstName = $first
            LastName = $last
            Handle = $handle
            TwitterFollowers = [int] $followers
        }
    }
FirstName LastName   Handle       TwitterFollowers
--------- --------   ------       ----------------
Lee       Holmes     @Lee_Holmes             13000
Staffan   Gustafsson @StaffanGson              463

Support for Multiple Patterns

As we can see above, only half of the data matched the pattern given to Select-String.

A technique that I find useful is to take advantage of the fact that Select-String supports the use of multiple patterns.

The lines of input data in twitterData.txt contain the same type of information, but they’re formatted in slightly different ways.
Using multiple patterns in combination with named capture groups makes it a breeze to extract the groups even when the positions of the groups differ.

$firstLastPattern = "^(?<first>\w+) (?<last>[^-]+)-(?<followers>\d+) (?<handle>@.+)"
$lastFirstPattern = "^(?<last>[^\s,]+),\s+(?<first>[^-]+)-(?<handle>@[^,]+),(?<followers>\d+)"
Get-ChildItem twitterData.txt |
     Select-String -Pattern $firstLastPattern, $lastFirstPattern |
    Foreach-Object {
        # here we access the groups by name instead of by index
        $first, $last, $followers, $handle = $_.Matches[0].Groups['first', 'last', 'followers', 'handle'].Value
        [PSCustomObject] @{
            FirstName = $first
            LastName = $last
            Handle = $handle
            TwitterFollowers = [int] $followers
        }
    }
FirstName LastName   Handle        TwitterFollowers
--------- --------   ------        ----------------
Steve     Lee        @Steve_MSFT               2992
Lee       Holmes     @Lee_Holmes              13000
Staffan   Gustafsson @StaffanGson               463
Joey      Tribbiani  @Matt_LeBlanc           463400

Breaking down the $firstLastPattern gives us

(?x)                # this regex ignores whitespace in the pattern. Makes it possible to document a regex with comments.
^                   # the start of the line
(?<first>\w+)       # capture one or more of any word characters into a group named 'first'
\s                  # a space
(?<last>[^-]+)      # capture one or more of any characters but `-` into a group named 'last'
-                   # a '-'
(?<followers>\d+)   # capture 1 or more digits into a group named 'followers'
\s                  # a space
(?<handle>@.+)      # capture a '@' followed by one or more characters into a group named 'handle'

The second regex is similar, but with the groups in different order. But since we retrieve the groups by name, we don’t have to care about the positions of the capture groups, and multiple assignment works fine.

Context around Matches

Select-String also has a Context parameter which accepts an array of one or two numbers specifying the number of lines before and after a match that should be captured. All text parsing techniques in this post can be used to parse information from the context lines.
The result object has a Context property, that returns an object with PreContext and PostContext properties, both of the type string[].

This can be used to get the second line before a match:

# using the context property
Get-ChildItem twitterData.txt |
    Select-String -Pattern "Staffan" -Context 2,1 |
    Foreach-Object { $_.Context.PreContext[1], $_.Context.PostContext[0] }
Lee Holmes-13000 @Lee_Holmes
Tribbiani, Joey-@Matt_LeBlanc,463400

To understand the indexing of the Pre- and PostContext arrays, consider the following:

Lee, Steve-@Steve_MSFT,2992                  <- PreContext[0]
Lee Holmes-13000 @Lee_Holmes                 <- PreContext[1]
Staffan Gustafsson-463 @StaffanGson          <- Pattern matched this line
Tribbiani, Joey-@Matt_LeBlanc,463400         <- PostContext[0]

The pipeline support of Select-String makes it different from the other parsing tools available in PowerShell, and makes it the undisputed king of one-liners.

I would like to stress how much more useful Select-String becomes once you understand how to get to the parts of the matches.

Summary

We have looked at useful methods of the string class, especially how to use Substring to get to text at a specific offset. We also looked at regular expressions, a language used to describe patterns in text, and at the Select-String cmdlet, which makes heavy use of regular expressions.

Next time, we will look at the operators -split and -match, the switch statement (which is surprisingly useful for text parsing), and the regex class.

Staffan Gustafsson, @StaffanGson, github

Thanks to Jason Shirk, Mathias Jessen and Steve Lee for reviews and feedback.

Infrastructure + Security: Noteworthy News (January, 2019)


Hi there! Stanislav Belov here, and you are reading the next issue of the Infrastructure + Security: Noteworthy News series!  

As a reminder, the Noteworthy News series covers various areas, to include interesting news, announcements, links, tips and tricks from Windows, Azure, and Security worlds on a monthly basis.

Microsoft Azure
Azure Backup for virtual machines behind an Azure Firewall
This blog post primarily talks about how Azure Firewall and Azure Backup can be leveraged to provide comprehensive protection to your data. The former protects your network, while the latter backs up your data to the cloud. Azure Firewall, now generally available, is a cloud-based network security service that protects your Azure Virtual Network resources. It is a fully stateful firewall as a service with built-in high availability and unrestricted cloud scalability. With Azure Firewall you can centrally create, enforce, and log application and network connectivity policies across subscriptions and virtual networks. It uses a static public IP address for your virtual network resources, allowing outside firewalls to identify traffic originating from your virtual network.
AZ-900: Microsoft Azure Fundamentals Now Available
This exam is designed for candidates looking to demonstrate foundational knowledge of cloud services and how those services are implemented with Microsoft Azure. The exam is intended for those of you who don’t have a technical background but have an interest in the cloud, such as those involved in selling or purchasing cloud-based solutions and services; those of you with a technical background who have a need to validate your foundational-level knowledge around cloud services; and those of you who are simply interested in exploring the world of cloud-based solutions to determine if it’s the building block you need to change your career.
Windows Server
Windows Admin Center 1809.5 is now generally available!

Windows Admin Center version 1809.5 is a cumulative update to our 1809 GA release in September which includes various quality and functional improvements and bug fixes throughout the product.

Windows Client
Windows 10 Tip: Microsoft Forms

Your family has a weekend free and you’re trying to figure out what to do during that precious time together. But instead of asking each person individually or using social media, it might be easier – and more efficient – to gauge the group’s preferences through a survey or a quiz. Now you can easily create either or both with Microsoft Forms, a simple, lightweight tool.

Introducing new advanced security and compliance offerings for Microsoft 365

When we first introduced Microsoft 365 bringing together Office 365, Windows 10, and Enterprise Mobility + Security (EMS), our vision was two-fold: 1) deliver a great experience for customers to empower employee creativity and teamwork, and 2) provide the most secure and easy to manage platform for a modern workplace. We’ve been thrilled with the response, as customers like BP, Gap, Walmart, and Lilly have contributed to triple-digit seat growth since its launch.

Security
Automating Security workflows with Microsoft’s CASB and MS Flow

As Cloud Security is becoming an increasingly greater concern for organizations of all sizes, the role and importance of Security Operations Centers (SOC) continues to expand. While end users leverage new cloud apps and services daily, Security professionals that keep track of security incidents remain a scarce resource. Consequently, SOC teams are looking for solutions that help automate processes where possible, to reduce the number of incidents that require their direct oversight and interaction.

Active Directory Kill Chain Attack & Defense
This document was designed to be a useful, informational asset for those looking to understand the specific tactics, techniques, and procedures (TTPs) attackers are leveraging to compromise Active Directory, with guidance on mitigation, detection, and prevention, and to understand modern post-exploitation adversary tradecraft.
Windows Defender ATP has protections for USB and removable devices
Knowing that removable device usage is a concern for enterprise customers in both of these types of scenarios we’ve worked on how removable devices can be protected with Windows Defender Advanced Threat Protection (Windows Defender ATP).
Step 3. Protect your identities: top 10 actions to secure your environment
The “Top 10 actions to secure your environment” series outlines fundamental steps you can take with your investment in Microsoft 365 security solutions. In “Step 3. Protect your identities,” you’ll learn how to define security policies to protect individual user identities against account compromise and protect your administrative accounts.
Improve your regulatory compliance
Azure Security Center helps streamline the process for meeting regulatory compliance requirements, using the Regulatory compliance dashboard. In the dashboard, Security Center provides insights into your compliance posture based on continuous assessments of your Azure environment. The assessments performed by Security Center analyze risk factors in your hybrid cloud environment in accordance with security best practices.
Vulnerabilities and Updates
Windows Security change affecting PowerShell

The recent (1/8/2019) Windows security patch for CVE-2019-0543 introduced a breaking change for a PowerShell remoting scenario. It is a narrowly scoped scenario that should have low impact for most users. The breaking change only affects local loopback remoting, which is a PowerShell remote connection made back to the same machine, while using non-Administrator credentials.

DSC Resource Kit Release January 2019

We recently released the DSC Resource Kit! This release includes updates to 14 DSC resource modules.

Support Lifecycle
Windows 7 support will end on January 14, 2020

Microsoft made a commitment to provide 10 years of product support for Windows 7 when it was released on October 22, 2009. When this 10-year period ends, Microsoft will discontinue Windows 7 support so that we can focus our investment on supporting newer technologies and great new experiences. The specific end of support day for Windows 7 will be January 14, 2020. After that, technical assistance and automatic updates that help protect your PC will no longer be made available for the product. Microsoft strongly recommends that you move to Windows 10 sometime before January 2020 to avoid a situation where you need service or support that is no longer available.

Extended Security Updates for SQL Server and Windows Server 2008/2008 R2: Frequently Asked Questions (PDF)

On January 14, 2020, support for Windows Server 2008 and 2008 R2 will end. That means the end of regular security updates. Don’t let your infrastructure and applications go unprotected. We’re here to help you migrate to current versions for greater security, performance and innovation.

Products reaching End of Support for 2018

Products reaching End of Support for 2019

Products reaching End of Support for 2020

Microsoft Premier Support News
Check out Microsoft Services public blog for new Proactive Services as well as new features and capabilities of the Services Hub, On-demand Assessments, and On-demand Learning platforms.

Announcing the PowerShell Preview Extension in VSCode


Preview builds of the PowerShell extension are now available in VSCode

We are excited to announce the PowerShell Preview extension in the VSCode marketplace!
The PowerShell Preview extension allows users on Windows PowerShell 5.1, PowerShell 6.0, and all newer versions to get and test the latest updates to the PowerShell extension, and it comes with some exciting features. The PowerShell Preview extension is a substitute for the PowerShell extension, so the PowerShell extension and the PowerShell Preview extension should not be enabled at the same time.

Features of the PowerShell Preview extension

The PowerShell Preview extension is built on .NET Standard, which simplifies our code and dependency structure.

The PowerShell Preview extension also contains PSReadLine support in the integrated console for Windows behind a
feature flag. PSReadLine provides a consistent and rich interactive experience, including syntax coloring and
multi-line editing and history, in the PowerShell console, in Cloud Shell, and now in the VSCode terminal.
For more information on the benefits of PSReadLine, check out their documentation.

To enable PSReadLine support in the Preview version on Windows, please add the following to your user settings:

"powershell.developer.featureFlags": [ "PSReadLine" ]

HUGE thanks to @SeeminglyScience for all his amazing work getting PSReadLine working in PowerShell Editor Services!

Why we created the PowerShell Preview extension

By having a preview channel, which supports Windows PowerShell 5.1 and PowerShell Core 6, in addition to our existing stable channel, we can get new features out faster. PSReadLine support for the VSCode integrated console is a great
example of a feature that the preview build makes possible. Having a preview channel also allows us to get more feedback
on new features and to iterate on changes before they arrive in our stable channel.

How to Get/Use the PowerShell Preview extension

If you don’t already have VSCode, start here.

Once you have VSCode open, press Ctrl+Shift+X to open the extensions marketplace.
Next, type PowerShell Preview in the search bar.
Click Install on the PowerShell Preview page.
Finally, click Reload in order to refresh VSCode.

If you already have the PowerShell extension, please disable it to use the PowerShell Preview extension.
To disable the PowerShell extension, find it in the Extensions sidebar view, specifically under the list of Enabled extensions, right-click on the PowerShell extension, and select Disable. Please note that it is important to only have either the
PowerShell extension or the PowerShell Preview extension enabled at one time.

Breaking Changes

As stated above, this version of the PowerShell extension only works with Windows PowerShell versions 5.1 and
PowerShell Core 6.

Reporting Feedback

An important benefit of a preview extension is getting feedback from users.
To report issues with the extension use our GitHub repo.
When reporting issues be sure to specify the version of the extension you are using.

Sydney Smith
Program Manager
PowerShell Team

The PowerShell-Docs repositories have been moved


The PowerShell-Docs repositories have been moved from the PowerShell organization to the MicrosoftDocs organization in GitHub.

The tools we use to build the documentation are designed to work in the MicrosoftDocs org. Moving the repository lets us build the foundation for future improvements in our documentation experience.

Impact of the move

During the move there may be some downtime. The affected repositories will be inaccessible during
the move process. Also, the documentation processes will be paused. After the move, we need to test
access permissions and automation scripts.
 

After these tasks are complete, access and operations will return to normal. GitHub automatically
redirects requests to the old repo URL to the new location.
 

For more information about transferring repositories in GitHub, see About repository transfers.

If the transferred repository has any forks, then those forks will remain associated with the
repository after the transfer is complete.
  • All Git information about commits, including contributions, is preserved.
  • All of the issues and pull requests remain intact when transferring a repository.
  • All links to the previous repository location are automatically redirected to the new location.


When you use git clone, git fetch, or git push on a transferred repository, these commands will redirect to the new repository location or URL.

However, to avoid confusion, we strongly recommend updating any existing local clones to point to
the new repository URL.
For more information, see Changing a remote’s URL.

The following example shows how to change the “upstream” remote to point to the new location:

[Wed 06:08PM] [staging =]
PS C:\Git\PS-Docs\PowerShell-Docs> git remote -v
origin  https://github.com/sdwheeler/PowerShell-Docs.git (fetch)
origin  https://github.com/sdwheeler/PowerShell-Docs.git (push)
upstream        https://github.com/PowerShell/PowerShell-Docs.git (fetch)
upstream        https://github.com/PowerShell/PowerShell-Docs.git (push)

[Wed 06:09PM] [staging =]
PS C:\Git\PS-Docs\PowerShell-Docs> git remote set-url upstream https://github.com/MicrosoftDocs/PowerShell-Docs.git

[Wed 06:10PM] [staging =]
PS C:\Git\PS-Docs\PowerShell-Docs> git remote -v
origin  https://github.com/sdwheeler/PowerShell-Docs.git (fetch)
origin  https://github.com/sdwheeler/PowerShell-Docs.git (push)
upstream        https://github.com/MicrosoftDocs/PowerShell-Docs.git (fetch)
upstream        https://github.com/MicrosoftDocs/PowerShell-Docs.git (push)


Which repositories were moved?

 

The following repositories were transferred:

  • PowerShell/PowerShell-Docs
  • PowerShell/powerShell-Docs.cs-cz
  • PowerShell/powerShell-Docs.de-de
  • PowerShell/powerShell-Docs.es-es
  • PowerShell/powerShell-Docs.fr-fr
  • PowerShell/powerShell-Docs.hu-hu
  • PowerShell/powerShell-Docs.it-it
  • PowerShell/powerShell-Docs.ja-jp
  • PowerShell/powerShell-Docs.ko-kr
  • PowerShell/powerShell-Docs.nl-nl
  • PowerShell/powerShell-Docs.pl-pl
  • PowerShell/powerShell-Docs.pt-br
  • PowerShell/powerShell-Docs.pt-pt
  • PowerShell/powerShell-Docs.ru-ru
  • PowerShell/powerShell-Docs.sv-se
  • PowerShell/powerShell-Docs.tr-tr
  • PowerShell/powerShell-Docs.zh-cn
  • PowerShell/powerShell-Docs.zh-tw

Call to action

If you have a fork that you cloned, change your remote configuration to point to the new upstream URL.
Help us make the documentation better.
  • Submit issues when you find a problem in the docs.
  • Suggest fixes to documentation by submitting changes through the PR process.
Sean Wheeler
Senior Content Developer for PowerShell
https://github.com/sdwheeler

Parsing Text with PowerShell (2/3)


This is the second post in a three-part series.

  • Part 1:
    • Useful methods on the String class
    • Introduction to Regular Expressions
    • The Select-String cmdlet
  • Part 2:
    • the -split operator
    • the -match operator
    • the switch statement
    • the Regex class
  • Part 3:
    • a real world, complete and slightly bigger, example of a switch-based parser

The -split operator

The -split operator splits one or more strings into substrings.

A common pattern is a name-value pattern:
Note the usage of the Max-substrings parameter to the -split operator.
We want to ensure that it doesn’t matter if the value contains the character to split on.

$text = "Description=The '=' character is used for assigning values to a variable"
$name, $value = $text -split "=", 2

@"
Name  =  $name
Value =  $value
"@
Name  =  Description
Value =  The '=' character is used for assigning values to a variable

When the line to parse contains fields separated by a well known separator, that is never a part of the field values, we can use the -split operator
in combination with multiple assignment to get the fields into variables.

$name, $location, $occupation = "Spider Man,New York,Super Hero" -split ','

If only the location is of interest, the unwanted items can be assigned to $null.

$null, $location, $null = "Spider Man,New York,Super Hero" -split ','

$location
New York

If there are many fields, assigning to null doesn’t scale well. Indexing can be used instead, to get the fields of interest.

$inputText = "x,Staffan,x,x,x,x,x,x,x,x,x,x,Stockholm,x,x,x,x,x,x,x,x,11,x,x,x,x"
$name, $location, $age = ($inputText -split ',')[1,12,21]

$name
$location
$age
Staffan
Stockholm
11

It is almost always a good idea to create an object that gives context to the different parts.

$inputText = "x,Steve,x,x,x,x,x,x,x,x,x,x,Seattle,x,x,x,x,x,x,x,x,22,x,x,x,x"
$name, $location, $age = ($inputText -split ',')[1,12,21]
[PSCustomObject] @{
    Name = $name
    Location = $location
    Age = [int] $age
}
Name  Location Age
----  -------- ---
Steve Seattle   22

Instead of creating a PSCustomObject, we can create a class. It’s a bit more to type, but we can get more help from the engine, for example with tab completion.

The example below also shows an example of type conversion, where the default string to number conversion doesn’t work.
The age field is handled by PowerShell’s built-in type conversion. It is of type [int], and PowerShell will handle the conversion from string to int,
but in some cases we need to help out a bit. The ShoeSize field is also an [int], but the data is hexadecimal,
and without the hex specifier (‘0x’), this conversion fails for some values, and provides incorrect results for the others.

class PowerSheller {
    [string] $Name
    [string] $Location
    [int] $Age
    [int] $ShoeSize
}

$inputText = "x,Staffan,x,x,x,x,x,x,x,x,x,x,Stockholm,x,x,x,x,x,x,x,x,33,x,11d,x,x"
$name, $location, $age, $shoeSize = ($inputText -split ',')[1,12,21,23]
[PowerSheller] @{
    Name = $name
    Location = $location
    Age = $age
    # ShoeSize is expressed in hex, with no '0x' because reasons :)
    # And yes, it's in millimeters.
    ShoeSize = [Convert]::ToInt32($shoeSize, 16)
}
Name    Location  Age ShoeSize
----    --------  --- --------
Staffan Stockholm  33      285

The split operator’s first argument is actually a regex (by default, can be changed with options).
I use this on long command lines in log files (like those given to compilers) where there can be hundreds of options specified. This makes it hard to see if a certain option is specified or not, but when split into their own lines, it becomes trivial.
The pattern below uses a positive lookahead assertion.
It can be very useful to make patterns match only in a given context, like if they are, or are not, preceded or followed by another pattern.

$cmdline = "cl.exe /D Bar=1 /I SomePath /D Foo  /O2 /I SomeOtherPath /Debug a1.cpp a3.cpp a2.cpp"

$cmdline -split "\s+(?=[-/])"
cl.exe
/D Bar=1
/I SomePath
/D Foo
/O2
/I SomeOtherPath
/Debug a1.cpp a3.cpp a2.cpp

Breaking down the regex, by rewriting it with the x option:

(?x)      # ignore whitespace in the pattern, and enable comments after '#'
\s+       # one or more spaces
(?=[-/])  # only match the previous spaces if they are followed by any of '-' or '/'.

Splitting with a scriptblock

The -split operator also comes in another form, where you can pass it a scriptblock instead of a regular expression.
This allows for more complicated logic, that can be hard or impossible to express as a regular expression.

The scriptblock accepts two parameters, the text to split and the current index. $_ is bound to the character at the current index.

function SplitWhitespaceInMiddleOfText {
    param(
        [string]$Text,
        [int] $Index
    )
    if ($Index -lt 10 -or $Index -gt 40){
        return $false
    }
    $_ -match '\s'
}

$inputText = "Some text that only needs splitting in the middle of the text"
$inputText -split $function:SplitWhitespaceInMiddleOfText
Some text that
only
needs
splitting
in
the middle of the text

The $function:SplitWhitespaceInMiddleOfText syntax is a way to get to the content (the scriptblock that implements it) of the function, just as $env:UserName gets the content of an item in the env: drive.
It provides a way to document and/or reuse the scriptblock.

The -match operator

The -match operator works in conjunction with the $matches automatic variable.
Each time a -match or a -notmatch succeeds, the $matches variable is populated so that each capture group gets its own entry.
If the capture group is named, the key will be the name of the group, otherwise it will be the index.

As an example:

if ('a b c' -match '(\w) (?<named>\w) (\w)'){
    $matches
}
Name                           Value
----                           -----
named                          b
2                              c
1                              a
0                              a b c

Notice that the indices only increase for groups without names, i.e. the indices of later unnamed groups shift when an earlier group is named.

Armed with the regex knowledge from the earlier post, we can write the following:

PS> "    10,Some text" -match '^s+(d+),(.+)'
True
PS> $matches
Name                           Value
----                           -----
2                              Some text
1                              10
0                              10,Some text

or with named groups

PS> "    10,Some text" -match '^s+(?<num>d+),(?<text>.+)'
True
PS> $matches
Name                           Value
----                           -----
num                            10
text                           Some text
0                              10,Some text

The important thing here is that the parts of the pattern that we want to extract have parentheses around them.
That is what creates the capture groups that allow us to reference those parts of the matching text, either by name or by index.

Combining this into a function makes it easy to use:

function ParseMyString($text){
    if ($text -match '^\s+(\d+),(.+)') {
        [PSCustomObject] @{
            Number = [int] $matches[1]
            Text    = $matches[2]
        }
    }
    else {
        Write-Warning "ParseMyString: Input `$text` doesn't match pattern"
    }
}

ParseMyString "    10,Some text"
Number  Text
------- ----
     10 Some text

Notice the type conversion when assigning the Number property. As long as the number fits in the range of an [int], this will always succeed, since we have already made a successful match in the if statement above. ([long] or [bigint] could be used; in this case I provide the input, and I have promised myself to stick to a range that fits in a 32-bit integer.)
Now we will be able to sort or do numerical operations on the Number property, and it will behave like we want it to – as a number, not as a string.
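
A quick sketch of the difference the conversion makes when sorting:

'10', '9', '100' | Sort-Object              # 10, 100, 9 - lexical, as strings
'10', '9', '100' | Sort-Object { [int]$_ }  # 9, 10, 100 - numeric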

The switch statement

Now we’re at the big guns 🙂

The switch statement in PowerShell has been given special functionality for parsing text.
It has two flags that are useful for parsing text and files with text in them. -regex and -file.

When specifying -regex, the match clauses that are strings are treated as regular expressions. The switch statement also sets the $matches automatic variable.

When specifying -file, PowerShell treats the input as a file name, to read input from, rather than as a value statement.
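
A minimal sketch of -file (the log file name here is hypothetical):

$errorCount = 0
switch -regex -file .\build.log {
    'error'   { $errorCount++ }
    'warning' { Write-Warning $_ }
}
"$errorCount error line(s) found"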

Note the use of a ScriptBlock instead of a string as the match clause to determine if we should skip preamble lines.

class ParsedOutput {
    [int] $Number
    [string] $Text

    [string] ToString() { return "{0} ({1})" -f $this.Text, $this.Number }
}

$inputData =
    "Preamble line",
    "LastLineOfPreamble",
    "    10,Some Text",
    "    Some other text,20"

$inPreamble = $true
switch -regex ($inputData) {

    {$inPreamble -and $_ -eq 'LastLineOfPreamble'} { $inPreamble = $false; continue }

    "^\s+(?<num>\d+),(?<text>.+)" {  # this matches the first line of non-preamble input
        [ParsedOutput] @{
            Number = $matches.num
            Text = $matches.text
        }
        continue
    }

    "^\s+(?<text>[^,]+),(?<num>\d+)" { # this matches the second line of non-preamble input
        [ParsedOutput] @{
            Number = $matches.num
            Text = $matches.text
        }
        continue
    }
}
Number  Text
------ ----
    10 Some Text
    20 Some other text

The pattern [^,]+ in the text group above is useful. It means: match anything that is not a comma. We are using the any-of construct [], and within those brackets ^ changes meaning from 'beginning of the line' to 'anything but'.

That is useful when we are matching delimited fields. A requirement is that the delimiter cannot be part of the set of allowed field values.
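
A small sketch of why [^,]+ is preferable to a greedy .+ when a delimiter follows:

if ('a,b,c' -match '^(?<first>[^,]+),') { $matches.first }   # a
if ('a,b,c' -match '^(?<first>.+),')    { $matches.first }   # a,b - the greedy .+ runs to the last comma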

The regex class

regex is a type accelerator for System.Text.RegularExpressions.Regex. It can be useful when porting code from C#, and sometimes when we want to get more control in situations when we have many matches of a capture group. It also allows us to pre-create the regular expressions which can matter in performance sensitive scenarios, and to specify a timeout.
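
As a small sketch (the pattern here is arbitrary), a pre-created, compiled regex with a match timeout looks like this:

$pattern = [regex]::new('(\w+)=(\w+)', 'Compiled', [timespan]::FromSeconds(1))
$pattern.Match('name=value').Groups[2].Value   # value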

One instance where the regex class is needed is when you have multiple captures of a group.

Consider the following:

Text    Pattern
a,b,c,  (\w,)+

If the match operator is used, $matches will contain

Name                           Value
----                           -----
1                              c,
0                              a,b,c,

The pattern matched three times: for 'a,', 'b,' and 'c,'. However, only the last match is preserved in the $matches dictionary.
The following, on the other hand, allows us to get at all the captures of the group:

[regex]::match('a,b,c,', '(\w,)+').Groups[1].Captures
Index Length Value
----- ------ -----
    0      2 a,
    2      2 b,
    4      2 c,

Below is an example that uses the members of the Regex class to parse input data

class ParsedOutput {
    [int] $Number
    [string] $Text

    [string] ToString() { return "{0} ({1})" -f $this.Text, $this.Number }
}

$inputData =
    "    10,Some Text",
    "    Some other text,20"  # this text will not match

[regex] $pattern = "^\s+(\d+),(.+)"

foreach($d in $inputData){
    $match = $pattern.Match($d)
    if ($match.Success){
        $number, $text = $match.Groups[1,2].Value
        [ParsedOutput] @{
            Number = $number
            Text = $text
        }
    }
    else {
        Write-Warning "regex: '$d' did not match pattern '$pattern'"
    }
}
WARNING: regex: '    Some other text,20' did not match pattern '^\s+(\d+),(.+)'
Number Text
------ ----
    10 Some Text

It may surprise you that the warning appears before the output. PowerShell has a quite complex formatting system at the end of the pipeline, which treats pipeline output differently from other streams. Among other things, it buffers output at the beginning of a pipeline to calculate sensible column widths. This works well in practice, but sometimes gives strange reordering of output on different streams.

Summary

In this post we have looked at how the -split operator can be used to split a string in parts, how the -match operator can be used to extract different patterns from some text, and how the powerful switch statement can be used to match against multiple patterns.

We ended by looking at the regex class, which in some cases provides a bit more control, at the expense of ease of use.

This concludes the second part of this post. Next time, we will look at a complete, real world, example of a switch-based parser.

Thanks to Jason Shirk, Mathias Jessen and Steve Lee for reviews and feedback.

Staffan Gustafsson, @StaffanGson, powercode@github

Staffan works at DICE in Stockholm, Sweden, as a Software Engineer and has been using PowerShell since the first public beta.
He was most seriously pleased when PowerShell was open sourced, and has since contributed bug fixes, new features and performance improvements.
Staffan is a speaker at PSConfEU and is always happy to talk PowerShell.

Simplifying device management for schools with Microsoft Intune and Windows Autopilot


Since launching Intune for Education back in 2017, we have seen some amazing momentum in its adoption. Along the way, our engineering teams have continued to do some great work to simplify device management for schools. We spend a lot of time speaking directly with school IT departments, faculty, and students to better understand how we can build things that will meet the unique needs of the learning process – that means a richer learning experience with better learning outcomes for students and an environment where teachers can teach instead of troubleshoot technical problems.

 

Based on the feedback we’re getting from educators all over the world, I am really proud of the way Intune for Education has developed over the last 12 months. This is a huge win for schools and students everywhere.

 

Today, ConfigMgr manages tens of millions of PCs in education; the benefit of migrating to Intune is that everything moves to the cloud, with no on-premises infrastructure to maintain. For a lot of schools and school districts, this is a huge advantage.

 

We continue to strengthen Intune for Education with support for iOS classroom devices

For many students and teachers, iPads are frequently used in the classroom – it’s common to see iPads in K-2 and then PCs in grades 3-12. Last summer, we updated Intune for Education to support iOS device management so that it would be easier than ever for school IT admins (and even teachers, when necessary) to manage students’ devices from one unified, streamlined console.

We know that initial setup can be daunting for any IT team, so we've worked to simplify the setup of certificates and tokens required to manage iOS devices – it’s now easy to connect your Apple accounts to Intune for Education. Enrollment of new devices is much faster because you can automatically configure your Device Enrollment Program (DEP) settings and skip all the Setup Assistant screens. Building on Intune for Education’s Express configuration for iOS, you can also quickly assign and change apps and settings for different device groups using the same simplified workflows you use to manage your Windows devices.

We’ve also made improvements to Apple VPP support which will enable you to sync your VPP-purchased apps with Intune for Education, as well as assign these apps directly from the Intune for Education dashboard. You’ll also notice that we now display location information for your Apple School Manager VPP tokens so that you can easily identify your VPP tokens from both Intune for Education and Apple School Manager. You can even give your VPP tokens nicknames in Intune for Education for easy labeling and organization.

 

To learn more about this, check out the “Setup iOS Device Management” documentation.

 

Streamlining provisioning of classroom devices with Intune and Windows Autopilot

Based on what we’ve learned for IT teams working in education, we’ve also found ways to improve the startup experience for students so that they can seamlessly use their devices and access the classroom apps they need.

 

With Windows Autopilot, this kind of device deployment at scale is easy. Autopilot builds on existing technologies like Azure Active Directory (AAD) and Intune to manage and configure devices, and then automatically enrolls those devices when students first boot them up.

 

How provisioning with Windows Autopilot works:

 

Autopilot for Edu.png

 

Resetting a device for the next school year

Another great new feature is that admins can now execute the Autopilot Reset function remotely from Intune for Education – this will wipe all the devices and prepare student PCs for the next school year. This function removes all the apps, settings, and user data but keeps the devices enrolled in Azure AD and Intune. After the reset, these student PCs will receive the latest Intune policies so that they’re ready for the classroom.

 

New settings for Windows 10 devices

To provide more control over areas such as security, Windows updates, device sign-in, and browser experience, we have added several new admin settings, including:

  • Configure preferred Azure Active Directory tenant domain:
    This allows students to sign in to a device without a tenant domain name. Now students can sign in quickly and easily using just their alias.
  • Configure new tab page:
    From Intune you can determine which page opens when students add a tab in Microsoft Edge. These new tabs can open a blank page or a custom one, such as your school's home page.
  • Switch out of S Mode:
    This setting lets admins switch devices out of Windows 10 in S Mode, or it can prevent students from switching their own devices out of S Mode.

New Settings.png

Rename or delete devices from Intune for Education

If a student transfers between classes, or if a device changes ownership during the year, IT can now rename any Windows 10 device (version 1803 or later) remotely from the Intune for Education portal.

 

Once the name has been updated, the device can then be assigned to the correct group through dynamic grouping. Additionally, when a student leaves the school and takes their personal device with them, you can now delete that device from the Intune for Education portal. Deleting a device means unenrolling it from Intune and removing the device record from Azure Active Directory.

Rename and delete device.png

Unlimited Immersive Reader for students

An amazing benefit for students is that with Intune for Education, they get unlimited licenses for the Immersive Reader. Immersive Reader is a learning tool that creates a reading experience with accessibility and comprehension support for learners of all ages and abilities.

 

You can learn more about Immersive Reader here.

 

Simplify troubleshooting with the Device Details page

Finding the resources needed to troubleshoot a deployment issue is critical for any IT team, so we’ve created resources specifically for people working in education. Check out the “Device Details” page to see settings that might be in conflict and learn how to troubleshoot these issues. This page shows all the apps and settings applied to a user/device combination based on group memberships.

 

 

It is really inspiring to hear about the success of our customers and to see the way schools all over the world are simplifying the deployment and management of classroom devices. Here are just a few recent stories:

 

Bridgeport Public Schools

Using Intune to manage school data and devices turned out to be very efficient. Jeff Postolowski, Director of Information Technology for the Bridgeport School District: “I can push out a package using the Windows 10 deployment with Intune and they just come right down on the machines and we're good to go. It has simplified the management process.”

 

Immaculate Heart of Mary School
To manage all the school’s devices, Tim Thalheimer, the school’s Director of Technology, had one ‘hands-down’ choice: Microsoft Intune. Intune for Education, designed for K-12 school IT departments, is a web interface that allows admins to easily manage a large number of devices. Because of its ease of use, Intune for Education saved the IT team time and reduced IT admin workloads.

 

Southwest Local School District

Using the free Office 365 subscription bundled as part of their school’s device purchase plan, and Microsoft Intune for simplified user control and app management, they quickly configured and deployed 2,600 touchscreen laptops to students in grades 5-12.

 

Seattle Preparatory School

Phil Dietrich, IT Director: “We decided to add a couple of Apps to student devices a month after we deployed Intune for Education,” said Phil. “It was refreshing to just push a new App out to all the student devices using Intune for Education. Intune is wildly efficient.”

 

Freyberg Community School

Moved to Microsoft Intune to centralize device management. Previously, device management was handled via Windows Active Directory Group Policy, which only provided management to devices while they were onsite at the school; moving to Azure AD and Intune means that software updates and configuration policies can happen whenever the device is connected to the internet. Being able to remotely troubleshoot, configure, and provision computing resources has also improved the responsiveness of the school’s IT support. Educators now get the IT resources they need, when they need them, without unnecessary lag time.

Updates to Our Container Tagging Format


We’re introducing several minor changes to the tagging format of Windows containers. These changes will begin taking effect next Patch Tuesday, February 12, 2019.

Build Number Tags

We’re re-introducing a build tag across all Windows base images. This will complement the existing KB-formatted tag and release channel tags. Here’s an example:

#The equivalent build number for the latest KB4480116, which went live for January Patch Tuesday:
docker pull mcr.microsoft.com/windows/servercore:10.0.17763.253

We released Windows Server 2016 container images with two tag forms. The first was the release channel tags, ltsc2016 and sac2016. The second was a build number. With the release of Windows Server, version 1709, we moved away from the build number to a Knowledge Base number system. That way, users could search the KB number and find the associated support article to understand what patches were in the image.

Feedback from users indicated that the KB tag alone was not clear enough in communicating which image was in fact newer or whether the container image was an exact match to their host OS version. We believe users will have an easier time distinguishing these things with the reintroduction of the build tag.

Consistent Tags Across all Microsoft Repos

To have consistency across all Microsoft container repos, going forward, all container image tags that were previously published with an underscore will now be published with a hyphen. Example:

#This is how it was before:
docker pull mcr.microsoft.com/windows/servercore:1809_KB4480116_amd64

#This is how it will be going forward:
docker pull mcr.microsoft.com/windows/servercore:1809-KB4480116-amd64

This change will affect all image tags published going forward. Old tags that used an underscore will continue to exist.

We’re also updating our arch-specific ARM images to align with the rest of the Docker community, changing from a tag of ‘arm’ for 32-bit arm to ‘arm32v7’. As an example of this change:

#This is how it was before:
docker pull mcr.microsoft.com/windows/nanoserver:1809_arm

#This is how it will be going forward:
docker pull mcr.microsoft.com/windows/nanoserver:1809-arm32v7

Container Repo Structure

In related news, Docker introduced changes to Docker Hub, which is the home of our new repository structure.

Screenshot of Docker Hub showing a Docker Hub search results page with an entry for "Windows base OS images" and the entry description reading "Product family for all Windows base OS container images".

The new structure allows us to have a repository family page—the “Windows base OS Images”—displayed above. From there, the family page links to the individual repos for all Windows container base images and points users to related repos. You can read more in a blog post from Rohit Tatachar, PM for Container Registry.

Conclusion

For more information, please visit our container docs at aka.ms/containers. Let us know your thoughts about the new repo structure in the comments below!


Parsing Text with PowerShell (3/3)


This is the third and final post in a three-part series.

  • Part 1:
    • Useful methods on the String class
    • Introduction to Regular Expressions
    • The Select-String cmdlet
  • Part 2:
    • the -split operator
    • the -match operator
    • the switch statement
    • the Regex class
  • Part 3:
    • a real world, complete and slightly bigger, example of a switch-based parser
      • General structure of a switch-based parser
      • The real world example

In the previous posts, we looked at the different operators that are available to us in PowerShell.

When analyzing crashes at DICE, I noticed that some of the C++ runtime binaries were missing debug symbols. They should be available for download from Microsoft’s public symbol server, and most versions were there. However, due to some process errors at DevDiv, some builds were released publicly without available debug symbols.
In some cases, those missing symbols prevented us from debugging those crashes, and in all cases, they triggered my developer OCD.

So, to give actionable feedback to Microsoft, I scripted a debugger (cdb.exe in this case) to give a verbose list of the loaded modules, and parsed the output with PowerShell, which was also later used to group and filter the resulting data set. I sent this data to Microsoft, and 5 days later, the missing symbols were available for download. Mission accomplished!

This post will describe the parser I wrote for this task (it turned out that I had good use for it for other tasks later), and the general structure is applicable to most parsing tasks.

The example will show how a switch-based parser would look when the input data isn’t as tidy as it normally is in examples, but messy – as the real world data often is.

General Structure of a switch Based Parser

Depending on the structure of our input, the code must be organized in slightly different ways.

Input may have a record start that differs by indentation or some distinct token like

Foo                    <- Record start - No whitespace at the beginning of the line
    Prop1=Staffan      <- Properties for the record - starts with whitespace
    Prop3 =ValueN
Bar
    Prop1=Steve
    Prop2=ValueBar2

If the data to be parsed has an explicit start record, it is a bit easier than if it doesn’t have one.
We create a new data object when we get a record start, after writing any previously created object to the pipeline.
At the end, we need to check if we have parsed a record that hasn’t been written to the pipeline.

The general structure of a such a switch-based parser can be as follows:

$inputData = @"
Foo
    Prop1=Value1
    Prop3=Value3
Bar
    Prop1=ValueBar1
    Prop2=ValueBar2
"@ -split '\r?\n'   # This regex is useful to split at line endings, with or without carriage return

class SomeDataClass {
    $ID
    $Name
    $Property2
    $Property3
}

# map to project input property names to the properties on our data class
$propertyNameMap = @{
    Prop1 = "Name"
    Prop2 = "Property2"
    Prop3 = "Property3"
}

$currentObject = $null
switch -regex ($inputData) {

    '^(\S.*)' {
        # record start pattern, in this case line that doesn't start with a whitespace.
        if ($null -ne $currentObject) {
            $currentObject                   # output to pipeline if we have a previous data object
        }
        $currentObject = [SomeDataClass] @{  # create new object for this record
            Id = $matches.1                  # with Id like Foo or Bar
        }
        continue
    }

    # set the properties on the data object
    '^\s+([^=]+)=(.*)' {
        $name, $value = $matches[1, 2]
        # project property names
        $propName = $propertyNameMap[$name]
        if ($null -eq $propName) {
            $propName = $name
        }
        # assign the parsed value to the projected property name
        $currentObject.$propName = $value
        continue
    }
}

if ($currentObject) {
    # Handle the last object if any
    $currentObject # output to pipeline
}
ID  Name      Property2 Property3
--  ----      --------- ---------
Foo Value1              Value3
Bar ValueBar1 ValueBar2

Alternatively, we may have input where the records are separated by a blank line, but without any obvious record start.

commitId=1234                         <- In this case, a commitId is first in a record
description=Update readme.md
                                      <- the blank line separates records
user=Staffan                          <- For this record, a user property comes first
commitId=1235
description=Fix bug.md

In this case the structure of the code looks a bit different. We create an object at the beginning, but keep track of whether it is dirty or not.
If we get to the end with a dirty object, we must output it.

$inputData = @"

commit=1234
desc=Update readme.md

user=Staffan
commit=1235
desc=Bug fix

"@ -split "\r?\n"

class SomeDataClass {
    [int] $CommitId
    [string] $Description
    [string] $User
}

# map to project input property names to the properties on our data class
# we only need to provide the ones that are different. 'User' works fine as it is.
$propertyNameMap = @{
    commit = "CommitId"
    desc   = "Description"
}

$currentObject = [SomeDataClass]::new()
$objectDirty = $false
switch -regex ($inputData) {
    # set the properties on the data object
    '^([^=]+)=(.*)$' {
        # parse a name/value
        $name, $value = $matches[1, 2]
        # project property names
        $propName = $propertyNameMap[$name]
        if ($null -eq $propName) {
            $propName = $name
        }
        # assign the projected property
        $currentObject.$propName = $value
        $objectDirty = $true
        continue
    }

    '^\s*$' {
        # separator pattern, in this case any blank line
        if ($objectDirty) {
            $currentObject                           # output to pipeline
            $currentObject = [SomeDataClass]::new()  # create new object
            $objectDirty = $false                    # and mark it as not dirty
        }
    }
    default {
        Write-Warning "Unexpected input: '$_'"
    }
}

if ($objectDirty) {
    # Handle the last object if any
    $currentObject # output to pipeline
}
CommitId Description      User
-------- -----------      ----
    1234 Update readme.md
    1235 Bug fix          Staffan

The Real World Example

I have adapted this sample slightly so that I get the loaded modules from a running process instead of from my crash dumps. The format of the output from the debugger is the same.
The following command launches a command line debugger on notepad, with a script that gives a verbose listing of the loaded modules, and quits:

# we need to muck around with the console output encoding to handle the trademark chars
# imagine no encodings
# it's easy if you try
# no code pages below us
# above us only sky
[Console]::OutputEncoding = [System.Text.Encoding]::GetEncoding("iso-8859-1")

$proc = Start-Process notepad -passthru
Start-Sleep -seconds 1
$cdbOutput = cdb -y 'srv*c:\symbols*http://msdl.microsoft.com/download/symbols' -c ".reload -f;lmv;q" -p $proc.Id

The output of the command above is attached to the blog post for those who want to follow along but aren’t running Windows or don’t have cdb.exe installed.

The (abbreviated) output looks like this:

Microsoft (R) Windows Debugger Version 10.0.16299.15 AMD64
Copyright (c) Microsoft Corporation. All rights reserved.

*** wait with pending attach

************* Path validation summary **************
Response                         Time (ms)     Location
Deferred                                       srv*c:\symbols*http://msdl.microsoft.com/download/symbols
Symbol search path is: srv*c:\symbols*http://msdl.microsoft.com/download/symbols
Executable search path is:
ModLoad: 00007ff6`e9da0000 00007ff6`e9de3000   C:\Windows\system32\notepad.exe
...
ModLoad: 00007ffe`97d80000 00007ffe`97db1000   C:\WINDOWS\SYSTEM32\ntmarta.dll
(98bc.40a0): Break instruction exception - code 80000003 (first chance)
ntdll!DbgBreakPoint:
00007ffe`9cd53050 cc              int     3
0:007> cdb: Reading initial command '.reload -f;lmv;q'
Reloading current modules
.....................................................
start             end                 module name
00007ff6`e9da0000 00007ff6`e9de3000   notepad    (pdb symbols)          c:\symbols\notepad.pdb\2352C62CDF448257FDBDDA4081A8F9081\notepad.pdb
    Loaded symbol image file: C:\Windows\system32\notepad.exe
    Image path: C:\Windows\system32\notepad.exe
    Image name: notepad.exe
    Image was built with /Brepro flag.
    Timestamp:        329A7791 (This is a reproducible build file hash, not a timestamp)
    CheckSum:         0004D15F
    ImageSize:        00043000
    File version:     10.0.17763.1
    Product version:  10.0.17763.1
    File flags:       0 (Mask 3F)
    File OS:          40004 NT Win32
    File type:        1.0 App
    File date:        00000000.00000000
    Translations:     0409.04b0
    CompanyName:      Microsoft Corporation
    ProductName:      Microsoft® Windows® Operating System
    InternalName:     Notepad
    OriginalFilename: NOTEPAD.EXE
    ProductVersion:   10.0.17763.1
    FileVersion:      10.0.17763.1 (WinBuild.160101.0800)
    FileDescription:  Notepad
    LegalCopyright:   © Microsoft Corporation. All rights reserved.
...
00007ffe`9ccb0000 00007ffe`9ce9d000   ntdll      (pdb symbols)          c:\symbols\ntdll.pdb\B8AD79538F2730FD9BACE36C9F9316A01\ntdll.pdb
    Loaded symbol image file: C:\WINDOWS\SYSTEM32\ntdll.dll
    Image path: C:\WINDOWS\SYSTEM32\ntdll.dll
    Image name: ntdll.dll
    Image was built with /Brepro flag.
    Timestamp:        E8B54827 (This is a reproducible build file hash, not a timestamp)
    CheckSum:         001F20D1
    ImageSize:        001ED000
    File version:     10.0.17763.194
    Product version:  10.0.17763.194
    File flags:       0 (Mask 3F)
    File OS:          40004 NT Win32
    File type:        2.0 Dll
    File date:        00000000.00000000
    Translations:     0409.04b0
    CompanyName:      Microsoft Corporation
    ProductName:      Microsoft® Windows® Operating System
    InternalName:     ntdll.dll
    OriginalFilename: ntdll.dll
    ProductVersion:   10.0.17763.194
    FileVersion:      10.0.17763.194 (WinBuild.160101.0800)
    FileDescription:  NT Layer DLL
    LegalCopyright:   © Microsoft Corporation. All rights reserved.
quit:

The output starts with info that I’m not interested in here. I only want to get the detailed information about the loaded modules. It is not until the line

start             end                 module name

that I care about the output.

Also, at the end there is a line that we need to be aware of:

quit:

that is not part of the module output.

To skip the parts of the debugger output that we don’t care about, we have a boolean flag initially set to true.
If that flag is set, we check if the current line, $_, is the module header in which case we flip the flag.

    $inPreamble = $true
    switch -regex ($cdbOutput) {

        { $inPreamble -and $_ -eq "start             end                 module name" } { $inPreamble = $false; continue }

I have made the parser a separate function that reads its input from the pipeline. This way, I can use the same function to parse module data, regardless of how I got the module data. Maybe it was saved on a file. Or came from a dump, or a live process. It doesn’t matter, since the parser is decoupled from the data retrieval.
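
For example, the parser defined below can just as well consume previously saved output (the file name here is hypothetical):

Get-Content .\cdb-output.txt | ConvertTo-ExecutableModule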

After the sample, there is a breakdown of the more complicated regular expressions used, so don’t despair if you don’t understand them at first.
Regular Expressions are notoriously hard to read, so much so that they make Perl look readable in comparison.

# define a class to store the data
class ExecutableModule {
    [string]   $Name
    [string]   $Start
    [string]   $End
    [string]   $SymbolStatus
    [string]   $PdbPath
    [bool]     $Reproducible
    [string]   $ImagePath
    [string]   $ImageName
    [DateTime] $TimeStamp
    [uint32]   $FileHash
    [uint32]   $CheckSum
    [uint32]   $ImageSize
    [version]  $FileVersion
    [version]  $ProductVersion
    [string]   $FileFlags
    [string]   $FileOS
    [string]   $FileType
    [string]   $FileDate
    [string[]] $Translations
    [string]   $CompanyName
    [string]   $ProductName
    [string]   $InternalName
    [string]   $OriginalFilename
    [string]   $ProductVersionStr
    [string]   $FileVersionStr
    [string]   $FileDescription
    [string]   $LegalCopyright
    [string]   $LegalTrademarks
    [string]   $LoadedImageFile
    [string]   $PrivateBuild
    [string]   $Comments
}

<#
.SYNOPSIS
Runs a debugger on a program to dump its loaded modules
#>
function Get-ExecutableModuleRawData {
    param ([string] $Program)
    $consoleEncoding = [Console]::OutputEncoding
    [Console]::OutputEncoding = [System.Text.Encoding]::GetEncoding("iso-8859-1")
    try {
        $proc = Start-Process $program -PassThru
        Start-Sleep -Seconds 1  # sleep for a while so modules are loaded
        cdb -y 'srv*c:\symbols*http://msdl.microsoft.com/download/symbols' -c ".reload -f;lmv;q" -p $proc.Id
        $proc.Close()
    }
    finally {
        [Console]::OutputEncoding = $consoleEncoding
    }
}

<#
.SYNOPSIS
Converts verbose module data from windows debuggers into ExecutableModule objects.
#>
function ConvertTo-ExecutableModule {
    [OutputType([ExecutableModule])]
    param (
        [Parameter(ValueFromPipeline)]
        [string[]] $ModuleRawData
    )
    begin {
        $currentObject = $null
        $preamble = $true
        $propertyNameMap = @{
            'File flags'      = 'FileFlags'
            'File OS'         = 'FileOS'
            'File type'       = 'FileType'
            'File date'       = 'FileDate'
            'File version'    = 'FileVersion'
            'Product version' = 'ProductVersion'
            'Image path'      = 'ImagePath'
            'Image name'      = 'ImageName'
            'FileVersion'     = 'FileVersionStr'
            'ProductVersion'  = 'ProductVersionStr'
        }
    }
    process {
        switch -regex ($ModuleRawData) {

            # skip lines until we get to our sentinel line
            { $preamble -and $_ -eq "start             end                 module name" } { $preamble = $false; continue }

            #00007ff6`e9da0000 00007ff6`e9de3000   notepad    (deferred)
            #00007ffe`9ccb0000 00007ffe`9ce9d000   ntdll      (pdb symbols)          c:\symbols\ntdll.pdb\B8AD79538F2730FD9BACE36C9F9316A01\ntdll.pdb
            '^([0-9a-f`]{17})\s([0-9a-f`]{17})\s+(\S+)\s+\(([^)]+)\)\s*(.+)?' {
                # see breakdown of the expression later in the post
                # on record start, output the currentObject, if any is set
                if ($null -ne $currentObject) {
                    $currentObject
                }
                $start, $end, $module, $pdbKind, $pdbPath = $matches[1..5]
                # create an instance of the object that we are adding info from the current record into.
                $currentObject = [ExecutableModule] @{
                    Start        = $start
                    End          = $end
                    Name         = $module
                    SymbolStatus = $pdbKind
                    PdbPath      = $pdbPath
                }
                continue
            }
            '^\s+Image was built with /Brepro flag\.' {
                $currentObject.Reproducible = $true
                continue
            }
            '^\s+Timestamp:\s+[^(]+\((?<timestamp>.{8})\)' {
                # see breakdown of the regular  expression later in the post
                # Timestamp:        Mon Jan  7 23:42:30 2019 (5C33D5D6)
                $intValue = [Convert]::ToInt32($matches.timestamp, 16)
                $currentObject.TimeStamp = [DateTime]::new(1970, 01, 01, 0, 0, 0, [DateTimeKind]::Utc).AddSeconds($intValue)
                continue
            }
            '^\s+TimeStamp:\s+(?<value>.{8}) \(This' {
                # Timestamp:        E78937AC (This is a reproducible build file hash, not a timestamp)
                $currentObject.FileHash = [Convert]::ToUInt32($matches.value, 16)
                continue
            }
            '^\s+Loaded symbol image file: (?<imageFile>[^)]+)' {
                $currentObject.LoadedImageFile = $matches.imageFile
                continue
            }
            '^\s+Checksum:\s+(?<checksum>\S+)' {
                $currentObject.Checksum = [Convert]::ToUInt32($matches.checksum, 16)
                continue
            }
            '^\s+Translations:\s+(?<value>\S+)' {
                $currentObject.Translations = $matches.value.Split(".")
                continue
            }
            '^\s+ImageSize:\s+(?<imageSize>.{8})' {
                $currentObject.ImageSize = [Convert]::ToUInt32($matches.imageSize, 16)
                continue
            }
            '^\s{4}(?<name>[^:]+):\s+(?<value>.+)' {
                # see breakdown of the regular expression later in the post
                # This part is any 'name: value' pattern
                $name, $value = $matches['name', 'value']

                # project the property name
                $propName = $propertyNameMap[$name]
                $propName = if ($null -eq $propName) { $name } else { $propName }

                # note the dynamic property name in the assignment
                # this will fail if the object doesn't have a property with the specified name
                $currentObject.$propName = $value
                continue
            }
            'quit:' {
                # ignore and exit
                break
            }
            default {
                # When writing the parser, it can be useful to include a line like the one below to see the cases that are not handled by the parser
                # Write-Warning "missing case for '$_'. Unexpected output format from cdb.exe"

                continue # skip lines that don't match the patterns we are interested in, like the start/end/module name header and the quit: output
            }
        }
    }
    end {
        # this is needed to output the last object
        if ($null -ne $currentObject) {
            $currentObject
        }
    }
}


Get-ExecutableModuleRawData Notepad |
    ConvertTo-ExecutableModule |
    Sort-Object ProductVersion, Name |
    Format-Table -Property Name, FileVersionStr, ProductVersion, FileDescription
Name               FileVersionStr                             ProductVersion FileDescription
----               --------------                             -------------- ---------------
PROPSYS            7.0.17763.1 (WinBuild.160101.0800)         7.0.17763.1    Microsoft Property System
ADVAPI32           10.0.17763.1 (WinBuild.160101.0800)        10.0.17763.1   Advanced Windows 32 Base API
bcrypt             10.0.17763.1 (WinBuild.160101.0800)        10.0.17763.1   Windows Cryptographic Primitives Library
...
uxtheme            10.0.17763.1 (WinBuild.160101.0800)        10.0.17763.1   Microsoft UxTheme Library
win32u             10.0.17763.1 (WinBuild.160101.0800)        10.0.17763.1   Win32u
WINSPOOL           10.0.17763.1 (WinBuild.160101.0800)        10.0.17763.1   Windows Spooler Driver
KERNELBASE         10.0.17763.134 (WinBuild.160101.0800)      10.0.17763.134 Windows NT BASE API Client DLL
wintypes           10.0.17763.134 (WinBuild.160101.0800)      10.0.17763.134 Windows Base Types DLL
SHELL32            10.0.17763.168 (WinBuild.160101.0800)      10.0.17763.168 Windows Shell Common Dll
...
windows_storage    10.0.17763.168 (WinBuild.160101.0800)      10.0.17763.168 Microsoft WinRT Storage API
CoreMessaging      10.0.17763.194                             10.0.17763.194 Microsoft CoreMessaging Dll
gdi32full          10.0.17763.194 (WinBuild.160101.0800)      10.0.17763.194 GDI Client DLL
ntdll              10.0.17763.194 (WinBuild.160101.0800)      10.0.17763.194 NT Layer DLL
RMCLIENT           10.0.17763.194 (WinBuild.160101.0800)      10.0.17763.194 Resource Manager Client
RPCRT4             10.0.17763.194 (WinBuild.160101.0800)      10.0.17763.194 Remote Procedure Call Runtime
combase            10.0.17763.253 (WinBuild.160101.0800)      10.0.17763.253 Microsoft COM for Windows
COMCTL32           6.10 (WinBuild.160101.0800)                10.0.17763.253 User Experience Controls Library
urlmon             11.00.17763.168 (WinBuild.160101.0800)     11.0.17763.168 OLE32 Extensions for Win32
iertutil           11.00.17763.253 (WinBuild.160101.0800)     11.0.17763.253 Run time utility for Internet Explorer

Regex pattern breakdown

Here is a breakdown of the more complicated patterns, using the ignore pattern whitespace modifier x:

([0-9a-f`]{17})\s([0-9a-f`]{17})\s+(\S+)\s+\(([^)]+)\)\s*(.+)?

# example input: 00007ffe`9ccb0000 00007ffe`9ce9d000   ntdll      (pdb symbols)          c:\symbols\ntdll.pdb\B8AD79538F2730FD9BACE36C9F9316A01\ntdll.pdb

(?x)                # ignore pattern whitespace
^                   # the beginning of the line
([0-9a-f`]{17})     # capture an expression like 00007ff6`e9da0000 - any hex digit or backtick, exactly 17 of them
\s                  # a space
([0-9a-f`]{17})     # capture an expression like 00007ff6`e9da0000 - any hex digit or backtick, exactly 17 of them
\s+                 # skip any number of spaces
(\S+)               # capture until we get a space - this would match the 'ntdll' part
\s+                 # skip one or more spaces
\(                  # a literal start parenthesis
    ([^)]+)         # capture anything but an end parenthesis
\)                  # a literal end parenthesis
\s*                 # skip zero or more spaces
(.+)?               # optionally capture any symbol file path

Breakdown of the name-value pattern:

^\s+(?<name>[^:]+):\s+(?<value>.+)

# example input:  File flags:       0 (Mask 3F)

(?x)                # ignore pattern whitespace
^                   # the beginning of the line
\s+                 # require one or more spaces
(?<name>[^:]+)      # capture anything that is not a ':' into the named group "name"
:                   # require a colon
\s+                 # require one or more spaces
(?<value>.+)        # capture everything until the end into the named group "value"

Breakdown of the timestamp pattern:

^\s+Timestamp:\s+[^(]+\((?<timestamp>.{8})\)

#example input:     Timestamp:        Mon Jan  7 23:42:30 2019 (5C33D5D6)

(?x)                # ignore pattern whitespace
^                   # the beginning of the line
\s+                 # require one or more spaces
Timestamp:          # the literal text 'Timestamp:'
\s+                 # require one or more spaces
[^(]+               # one or more of anything but an open parenthesis
\(                  # a literal '('
(?<timestamp>.{8})  # 8 characters of anything, captured into the group 'timestamp'
\)                  # a literal ')'

Gotchas – the Regex Cache

Something that can happen if you are writing a more complicated parser is the following:
The parser works well. You have 15 regular expressions in your switch statement and then you get some input you haven’t seen before, so you add a 16th regex.
All of a sudden, the performance of your parser tanks. WTF?

The .NET regex implementation has a cache of recently used regexes. You can check the size of it like this:

PS> [regex]::CacheSize
15

# bump it
[regex]::CacheSize = 20

And now your parser is fast(er) again.

Bonus tip

I frequently use PowerShell to write (generate) my code:

Get-ExecutableModuleRawData pwsh |
    Select-String '^\s+([^:]+):' |       # this pattern matches the module detail fields
    Foreach-Object {$_.matches.groups[1].value} |
    Select-Object -Unique |
    Foreach-Object -Begin   { "class ExecutableModuleData {" }`
                   -Process { "    [string] $" + ($_ -replace "\s.", {[char]::ToUpperInvariant($_.Groups[0].Value[1])}) }`
                   -End     { "}" }

The output is

class ExecutableModuleData {
    [string] $LoadedSymbolImageFile
    [string] $ImagePath
    [string] $ImageName
    [string] $Timestamp
    [string] $CheckSum
    [string] $ImageSize
    [string] $FileVersion
    [string] $ProductVersion
    [string] $FileFlags
    [string] $FileOS
    [string] $FileType
    [string] $FileDate
    [string] $Translations
    [string] $CompanyName
    [string] $ProductName
    [string] $InternalName
    [string] $OriginalFilename
    [string] $ProductVersion
    [string] $FileVersion
    [string] $FileDescription
    [string] $LegalCopyright
    [string] $Comments
    [string] $LegalTrademarks
    [string] $PrivateBuild
}

It is not complete – I don’t have the fields from the record start, some types are incorrect, and when run against other executables a few more fields may appear.
But it is a very good starting point. And way more fun than typing it 🙂

Note that this example is using a new feature of the -replace operator – to use a ScriptBlock to determine what to replace with – that was added in PowerShell Core 6.1.
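
As a standalone sketch of that feature (the input string is arbitrary):

# uppercase the character following each dash (PowerShell Core 6.1+)
'kebab-case-name' -replace '-(\w)', { $_.Groups[1].Value.ToUpper() }
kebabCaseName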

Bonus tip #2

A regular expression construct that I often find useful is non-greedy matching.
The example below shows the effect of the ? modifier, which can be used after * (zero or more) and + (one or more).

# greedy matching - match to the last occurrence of the following character (>)
if("<Tag>Text</Tag>" -match '<(.+)>') { $matches }
Name                           Value
----                           -----
1                              Tag>Text</Tag
0                              <Tag>Text</Tag>
# non-greedy matching - match to the first occurrence of the following character (>)
if("<Tag>Text</Tag>" -match '<(.+?)>') { $matches }
Name                           Value
----                           -----
1                              Tag
0                              <Tag>

See Regex Repeat for more info on how to control pattern repetition.

Summary

In this post, we have looked at how the structure of a switch-based parser could look, and how it can be written so that it works as a part of the pipeline.
We have also looked at a few slightly more complicated regular expressions in some detail.

As we have seen, PowerShell has a plethora of options for parsing text, and most of them revolve around regular expressions.
My personal experience has been that the time I’ve invested in understanding the regex language was well invested.

Hopefully, this gives you a good start with the parsing tasks you have at hand.

Thanks to Jason Shirk, Mathias Jessen and Steve Lee for reviews and feedback.

Staffan Gustafsson, @StaffanGson, github

Staffan works at DICE in Stockholm, Sweden, as a Software Engineer and has been using PowerShell since the first public beta.
He was most seriously pleased when PowerShell was open sourced, and has since contributed bug fixes, new features and performance improvements.
Staffan is a speaker at PSConfEU and is always happy to talk PowerShell.

Data Loss Prevention – Human error, insider threats and the in-between


Do you remember the first or last time you found a user had shared sensitive information with the wrong people?

 

Companies dedicate large amounts of resources and money towards establishing an airtight DLP policy to detect and protect company data and prevent it from getting into the wrong hands, whether deliberately or by mistake. But no matter how good the technology, or how vigilant the security team, there is always a wildcard – end users.

 

“A company can often detect or control when an outsider (non-employee) tries to access company data either physically or electronically, and can mitigate the threat of an outsider stealing company property. However, the thief who is harder to detect and who could cause the most damage is the insider—the employee with legitimate access. That insider may steal solely for personal gain, or that insider may be a “spy”—someone who is stealing company information or products in order to benefit another organization or country.”

                -Introductory guide to identifying malicious insiders, U.S. Federal Bureau of Investigation (FBI)

 


Figure 1: Statistics from the Insider Threat 2018 Report

 

From the above data we can see that insider threats are becoming a real concern for most organizations, and that active steps are taken to mitigate the risk inherent to these threats.

 

In this post we’ll discuss how regular users can expose sensitive data by wrongly classifying documents, how malicious users can take advantage of the encryption to exfiltrate data, and how Microsoft Cloud App Security’s new capability of scanning content in encrypted files, as well as the wider Microsoft Information Protection offering, can help organizations mitigate these risks.

 

The innocent mistake

While employees in the modern workplace are getting increasingly technologically savvy, and are finding new tools to improve their productivity, they aren’t always aware of the security implications of their actions.

 

Many of our customers are leveraging Microsoft Information Protection solutions to classify, label and protect their data. To minimize the impact on end users and their ability to be productive, these organizations often choose to empower their users to label documents themselves, by providing automatic suggestions but not auto-labeling or -protecting documents.

 

A user can inadvertently label a document containing highly confidential information with a low sensitivity label that applies minimal access restrictions. Since the file is already encrypted, it will not be scanned by the DLP solution, but might still be accessible to unauthorized people.

 

The malicious insider

A bigger threat, with a much higher potential for damage, is the malicious insider: someone who is actively working to exfiltrate sensitive information from the organization, whether for personal gain, corporate espionage, or other reasons.

 

This malicious user might exploit the ability to encrypt files to purposefully classify a file as low sensitivity while inserting highly sensitive data, and then share it externally. As in the “mistake” scenario, this allows the file to pass the DLP solution’s scanning.

 

How does Microsoft Cloud App Security handle these risks?

Microsoft Cloud App Security has a wide set of tools targeted at handling insider threats. These include user behavior anomaly detections, cloud discovery anomaly detections, and the newly released ability to scan content of encrypted documents.

 

User anomaly detection

Microsoft Cloud App Security comes with a wide set of out-of-the-box anomaly detection policies that are activated by default as soon as the product is enabled. These detections look at the activities performed by users in sanctioned apps and define a usage baseline, leveraging UEBA capabilities to automatically identify any anomalous behaviors going forward.

 

An example of these types of detections, aimed at insider threats, is “Unusual file download activity by user”. This detection will create an alert whenever a user performs file downloads that differ from their usual pattern – a potential indicator of a data exfiltration attempt.

 

Cloud anomaly detection

In addition to the user anomaly detections for sanctioned apps, Cloud App Security also offers detections aimed at identifying suspicious behavior of users in unsanctioned applications. These detections are based on the data we get and analyze as part of our Cloud Discovery capabilities.

 

An example for such a detection is “Data exfiltration to unsanctioned apps”, which looks at the amount of data being uploaded by users to unsanctioned applications – one of the most common scenarios of insider threat data exfiltration.

 

Content inspection of encrypted files

We have recently released the ability for an admin to allow MCAS to scan the content of files that are protected by Azure Information Protection. After enabling this functionality, the admin can define MCAS file policies to inspect the content of encrypted files, and generate an alert, or take an action based on the match.

 

This functionality ensures that files are handled according to their actual content, even if they are labeled incorrectly; thus, preventing sensitive data from leaving the organization – both by mistake and by design.

 


Figure 2: Policy setting to allow Microsoft Cloud App Security to scan files protected with AIP

 

Human error and malicious intent will forever be a part of organizational lifecycles. While we cannot eliminate them completely, it’s our goal to enable IT and Security admins to minimize this risk. With our advanced capabilities and unique set of insights, Microsoft Cloud App Security and the wider Microsoft Information Protection offering help organizations to protect their sensitive information – wherever it lives or travels.

 

More info and feedback

Learn how to get started with Microsoft Cloud App Security with our detailed technical documentation. Don’t have Microsoft Cloud App Security? Start a free trial today!

 

As always, we want to hear from you! If you have any suggestions, questions, or comments, please visit us on our Tech Community page.

 

Learn more about Microsoft Information Protection.

Configuration Manager: ‘The encryption type requested is not supported by the KDC’ Error When Running Reports


___________________________________________________________________________________________________________________________

IMPORTANT ANNOUNCEMENT FOR OUR READERS!

AskPFEPlat is in the process of a transformation to the new Core Infrastructure and Security TechCommunity, and will be moving by the end of March 2019 to our new home at https://aka.ms/CISTechComm (hosted at https://techcommunity.microsoft.com). Please bear with us while we are still under construction!

We will continue bringing you the same great content, from the same great contributors, on our new platform. Until then, you can access our new content on either https://aka.ms/askpfeplat as you do today, or at our new site https://aka.ms/CISTechComm. Please feel free to update your bookmarks accordingly!

Why are we doing this? Simple really; we are looking to expand our team internally in order to provide you even more great content, as well as take on a more proactive role in the future with our readers (more to come on that later)! Since our team encompasses many more roles than Premier Field Engineers these days, we felt it was also time we reflected that initial expansion.

If you have never visited the TechCommunity site, it can be found at https://techcommunity.microsoft.com. On the TechCommunity site, you will find numerous technical communities across many topics, which include discussion areas, along with blog content.

NOTE: In addition to the AskPFEPlat-to-Core Infrastructure and Security transformation, Premier Field Engineers from all technology areas will be working together to expand the TechCommunity site even further, joining together in the technology agnostic Premier Field Engineering TechCommunity (along with Core Infrastructure and Security), which can be found at https://aka.ms/PFETechComm!

As always, thank you for continuing to read the Core Infrastructure and Security (AskPFEPlat) blog, and we look forward to providing you more great content well into the future!

__________________________________________________________________________________________________________________________

 

Introduction

Hello, my name is Richard McIver and I’m a Premier Field Engineer with Microsoft specializing in System Center Configuration Manager.

I was recently working with a customer who suddenly started receiving a strange KDC error when attempting to run Configuration Manager reports from either within the Administration Console or the Reporting Services web portal. It took quite a bit of troubleshooting to isolate the root cause, so I’d just like to share our findings and resolution steps.

 

Problem Description

When running Configuration Manager reports that rely on Role Based Access Control (RBAC), SQL Server Reporting Services (SSRS) will attempt to communicate with Active Directory via Kerberos authentication to resolve the Security Identifier (SID) of the user.

However, when this customer attempted to run reports with RBAC embedded, the following error was displayed and the report failed to load.

The DefaultValue expression for the report parameter ‘UserTokenSIDs’ contains an error: The encryption type requested is not supported by the KDC. (rsRuntimeErrorInExpression)

The customer environment was SQL Server 2016 Reporting Services running on Windows Server 2012 R2; however, I’ve since been able to replicate this issue on Windows Server 2016 as well.

 

Root Cause Analysis

We eventually traced the root cause down to a security policy setting on the reporting point server that was recently configured via domain Group Policy Object (GPO).

Computer Configuration\Windows Settings\Security Settings\Local Policies\Security Options\Network security: Configure encryption types allowed for Kerberos: AES128_HMAC_SHA1, AES256_HMAC_SHA1, Future encryption types selected

As configured, this setting has the effect of limiting the encryption types allowed for Kerberos authentication from the reporting point server to only AES128, AES256, and Future encryption types.

However, the service account used by the SQL Reporting Services service was not properly configured to support these algorithms. Instead, SSRS was attempting to authenticate using the RC4 encryption type, which is no longer allowed on the server, resulting in the KDC error.
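
You can verify this by inspecting the msDS-SupportedEncryptionTypes attribute of the account. A sketch assuming the ActiveDirectory RSAT module, with a hypothetical service account name:

Import-Module ActiveDirectory
Get-ADUser 'svc-ssrs' -Properties 'msDS-SupportedEncryptionTypes' |
    Select-Object Name, 'msDS-SupportedEncryptionTypes'
# an empty value typically means the account falls back to RC4; 24 = AES128 (8) + AES256 (16)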

 

Remediation

In this case, the error can be resolved in one of two ways.

  1. Enable AES 128-bit and/or AES 256-bit encryption for the SQL Reporting Services service account
  2. Configure the Network security: Configure encryption types allowed for Kerberos policy setting on the reporting point server to include the RC4_HMAC_MD5 encryption type

Steps to enable AES encryption for the SQL Reporting Services service account

  1. Open Active Directory Users and Computers
  2. Browse to the user account used by SQL Reporting Services on the affected server
  3. Right-click the user account and select Properties
  4. Click on the Account tab
  5. Under Account options, check the box next to one or both of the following
    1. This account supports Kerberos AES 128 bit encryption
    2. This account supports Kerberos AES 256 bit encryption
  6. Click OK
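If you would rather script this change, the two checkboxes above map to bit flags on the account's msDS-SupportedEncryptionTypes attribute (8 = AES128, 16 = AES256). A minimal sketch using the ActiveDirectory module; the account name svc-ssrs is a placeholder for your own service account:

# Requires the ActiveDirectory RSAT module
Import-Module ActiveDirectory

# 24 = 8 (AES128) + 16 (AES256); 'svc-ssrs' is a placeholder account name
Set-ADUser -Identity 'svc-ssrs' -Replace @{'msDS-SupportedEncryptionTypes' = 24}

# Verify the change
Get-ADUser -Identity 'svc-ssrs' -Properties msDS-SupportedEncryptionTypes |
    Select-Object Name, msDS-SupportedEncryptionTypes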

Steps to configure the policy setting Network security: Configure encryption types allowed for Kerberos

Method 1 – Local Security Policy

  1. On the affected server, open an elevated command prompt
  2. Type secpol.msc and hit Enter
  3. In the Local Security Policy management console, expand Local Policies and click on Security Options
  4. Scroll down in the left-hand pane until you find the setting Network security: Configure encryption types allowed for Kerberos
  5. Right-click this setting and select Properties
  6. In the Local Security Settings tab, check the box next to RC4_HMAC_MD5, AES128_HMAC_SHA1, AES256_HMAC_SHA1, and Future encryption types
  7. Click OK

Method 2 – Group Policy Object (GPO)

  1. Open the Group Policy Management console and edit a new or existing GPO
  2. In the Group Policy Management Editor, expand Computer Configuration\Policies\Windows Settings\Security Settings\Local Policies\Security Options
  3. Right-click on Network security: Configure encryption types allowed for Kerberos and click Properties
  4. On the Security Policy Setting tab, check the box to Define these policy settings
  5. Check the box next to RC4_HMAC_MD5, AES128_HMAC_SHA1, AES256_HMAC_SHA1, and Future encryption types
  6. Click OK
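With either method, you can verify the effective setting on the reporting point server by reading the registry value this policy writes. A quick sketch in PowerShell; note the value is absent if the policy has never been defined:

# Bit mask: 1 = DES_CBC_CRC, 2 = DES_CBC_MD5, 4 = RC4_HMAC_MD5, 8 = AES128, 16 = AES256
Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Kerberos\Parameters' -Name 'SupportedEncryptionTypes'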

And that’s about for now… Hopefully this helps you out, and thanks for reading!

 

Microsoft Intune introduces MDM Security Baselines to secure the modern workplace


(This post is authored in collaboration with Joey Glocke, Senior Program Manager, Microsoft 365 Security)

 

Today, enterprise IT pros and policy makers must frequently update Windows security settings to help mitigate evolving cyber-security threats. The one-size-fits-all security approach often does not work anymore because what is most concerning to one organization may be completely different from the threats faced by another organization. Administrators are faced with deploying the right security configuration from hundreds of available granular device management controls, without impacting operations or productivity. Microsoft Intune helps administrators navigate and select the right Windows 10 security features for their business by offering security baselines within the service.


A security baseline is a group of Microsoft-recommended configuration settings, along with an explanation of their security impact. Industry-standard configurations that are broadly known and well-tested, such as Microsoft security baselines, increase efficiency and reduce costs compared to creating every setting yourself. These settings are continually updated with feedback from Microsoft security engineering teams, product groups, partners, and real-world learning from thousands of customers. Microsoft security baselines provide intelligent recommendations that are relevant to the needs of your business, based on your IT infrastructure.

 

Attach the power of intelligent cloud

 

Microsoft has years of experience publishing security baselines as Group Policy Objects in the Security and Compliance Toolkit (SCT). Customers have trusted this toolkit for years to provide templates to configure security baselines through Group Policy. Microsoft Intune now brings the same collective knowledge and expertise to secure the modern desktop with MDM security baselines.

 

Microsoft recommended security baselines in the Intune service leverage the greatly expanded manageability of Windows 10 using Mobile Device Management (MDM). These security baselines will be managed and updated directly from the cloud – providing customers the most recent and most advanced security settings and capabilities available from Microsoft 365. The same Windows security team that creates Group Policy security baselines has collaborated with Intune engineers to offer their extensive experience for these recommendations. If you're brand new to Intune, and not sure where to start, then MDM security baselines give you an advantage. You can quickly create and deploy a secure profile to help protect your organization's resources and data. If you're currently using Group Policy, migrating to Intune for management is much easier with these baselines natively built into Intune's modern management platform.

 

Intune MDM security baselines leverage intelligent cloud insights to deliver unique benefits beyond the security and compliance toolkit:

 

  • In-depth reporting on the state of each setting in the baseline on every device in your organization
  • A first-class policy interface using familiar Intune policies to easily customize and deploy a baseline with MDM
  • A versioning experience to stay up-to-date when Microsoft updates security baseline recommendations

 

You may choose to create security policies directly from these baselines and deploy them to users or customize the recommendations to meet the needs of your enterprise. Intune will validate that devices follow these baselines, report on baseline compliance and notify administrators if any devices or users move out of compliance.

 

Overview of MDM Security Baselines

 

Here’s an overview of various aspects of MDM security baselines in the Intune console. Please refer to Microsoft Intune product documentation for pre-requisites and guidance on deploying this feature:

 

1. Log in to the Microsoft Intune administration center and look for the new “Security baselines” workspace in the left navigation:

 


2. Review insights into the state of your Windows 10 devices against each published security baseline. Drill down to see more details and resolve the status, as appropriate


 

3. Create a security baseline profile using the familiar, customizable Intune policy interface


 

4. Easily deploy the security profiles to Azure Active Directory user groups


 

Next steps


The public preview of MDM security baselines is now being rolled out to Microsoft Intune tenants. If you are a Microsoft Intune customer, look for the public preview to be available in your tenant shortly.


If you require any help with your deployment, Microsoft offers a variety of resources and support tools to help you succeed. Customers with eligible subscriptions to Microsoft 365, Microsoft Enterprise Mobility + Security (EMS) or Microsoft Intune can request assistance from experts in FastTrack service at no additional cost for the life of their subscription. Whether you are a customer or a partner, FastTrack provides customized guidance for onboarding and adoption, including access to Microsoft engineering expertise, best practices, tools, and resources so you can leverage existing resources to plan your deployment.

 

More info and feedback

Learn how to get started with Microsoft Intune using our detailed technical documentation. Don’t have Microsoft Intune? Start a free trial or buy a subscription today!

 

As always, we want to hear from you! If you have any suggestions, questions, or comments, please visit us on our Tech Community page.

 

Follow @MSIntune on Twitter

 

Securing Applications with Least Privileged Service Accounts


Hello all, Nathan Penn here to shed some light on how I identify the most restrictive settings for applications and their associated service accounts, while keeping things functioning. When security is paramount (which is always) and we are deploying enterprise applications to Windows systems, we must ensure that the level of access provided to any given application is just what it requires to function. For example, if installing an application like SQL, you may hear that the service account “requires” local or even domain administrator rights to operate. While this is the EASY way and will ensure functionality, it is NOT true and can be done in a much more secure manner with a little effort… and maybe a little magic! The software installation process itself will require local administrator rights to the system by the individual performing the install, however, the service account does not need this level of persistent permission. So what permission does a given service account need and how do you identify it? The method I highlight below will demonstrate how to accomplish just that.

To walk through this process, I will install SQL Server 2016 Express on a domain-joined Windows 2016 Member Server, and then apply the Microsoft Security Baseline for Server 2016 (optimally, do this in a lab first and then move to production). The server will be placed in a Staging/Test OU that has block inheritance enabled to ensure that we don’t have to fight security lockdowns during the initial installation and configuration. There is an assumption here that you are not enforcing GPOs at a higher OU or domain level that configure user rights assignment. In my environment this is the “T1-Staging” OU and as you can see from the Group Policy Inheritance tab, no GPOs are being applied at this time.

If working in production, we should ensure that no end users or production data are added to this system while in the Staging OU. This process enables us to confirm a known-good functionally configured state for any of our applications and identify any settings that might conflict with our environment baseline security policies. The account performing the install is only a Member Server Administrator to the local system, so there is no need for Domain Admin rights. The service account (CONTOSO\s-sql) is a basic user account with no additional privileges or group memberships, as shown below.

As a first step, let’s look at the user rights assignments for our Windows 2016 Server that has no domain policies affecting it before the application install. On the server we launch the Local Policy Editor by running gpedit.msc and browsing to Local Computer Policy -> Computer Configuration -> Windows Settings -> Security Settings –> Local Policies -> User Rights Assignment to review the settings.

Now let’s start our software installation on the system. During the software installation, whenever prompted for service account, supply the associated service account and password.

After running the install, verifying that our installation was successful and checking that the application is functioning, we are now ready to revisit the User Rights Assignment as before and look for changes.

A quick review shows that there are updates adding accounts like NT SERVICE\MSSQL$… and CONTOSO\s-sql to several user right assignment policies as detailed here:

  • Adjust memory quotas for a process

  • Bypass traverse checking

  • Log on as a service

  • Replace a process level token

With this information, we can have our Group Policy admins create a GPO specific to our application that gives the service account (and any application local accounts as needed – i.e. NT SERVICE\MSSQL…) just the level of permission required to the associated server.
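A convenient way to capture those before/after changes, rather than eyeballing the Local Policy Editor, is to snapshot the user rights assignments to a file at each stage with the built-in secedit tool and compare the results. A minimal sketch; the file paths are illustrative:

# Before the install
secedit /export /cfg C:\Temp\rights-before.cfg /areas USER_RIGHTS

# After the install
secedit /export /cfg C:\Temp\rights-after.cfg /areas USER_RIGHTS

# Show the differences in PowerShell
Compare-Object (Get-Content C:\Temp\rights-before.cfg) (Get-Content C:\Temp\rights-after.cfg)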

Using an account and a system with access to create Active Directory Group Policies, launch the Group Policy Management Console, gpmc.msc, and create a customization GPO for the application. In this instance I created a policy named “SQL Server Customizations,” and I like to edit the properties of the policy to add comments on exactly what this policy is for.

Browse to Computer Configuration -> Windows Settings -> Security Settings –> Local Policies -> User Rights Assignment of the policy and define the settings we identified earlier.

IMPORTANT: A common mistake is defining a user right assignment with only the new additions, i.e. only adding the new SQL accounts to “Replace a process level token” while leaving off Local Service and Network Service. When identifying and updating user rights assignments, you need to include ALL accounts that should have permissions, as this does not perform a merge with the current settings, but instead performs a REPLACE of the user rights assignments.

Once we have finished defining the updated User Rights Assignment with our updates, we should have a domain GPO that looks similar to this:

After applying our GPO to our test OU, we can refresh group policy processing on our system and confirm that our settings are applying by executing a gpresult. As we can see, the User Rights Assignments are applied from our GPO.
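For reference, gpresult can produce either a quick console summary or a full HTML report (run it elevated on the target server; the report path is illustrative):

gpresult /r /scope computer
gpresult /h C:\Temp\gpreport.html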

At this point, I would recommend a reboot of the system so that we can confirm all settings are in effect at application start. Now we proceed with reaffirming that the application's services are starting with our policy applied.

Once satisfied that the application is still functional, we are ready to apply the security lockdowns by moving the Active Directory Server object from the block OU into a production OU that receives security lockdowns. In my environment, that means moving this server into my “T1-Database” OU where I have linked our SQL Server Customization policy. In addition to our policy the server will now also receive the Microsoft Security Baseline policy for Windows 2016 Servers (Available here – https://www.microsoft.com/en-us/download/details.aspx?id=55319).

We then refresh the group policies on our system and perform one more reboot to again ensure that all settings are in effect at application start and that our settings are applying by executing a gpresult. As we can see, the User Rights Assignments are being applied from our GPO as well as the Security Baseline.

Now we once again confirm that application services are still starting as expected.

There you have it – A service account that has no administrative privileges to the domain or even to the server itself, but still permits the application to function.

NOTE: If the application services are starting and running in the production OU, but no longer functioning as it was while in the Staging OU, this is not a service account issue but rather a lockdown configuration item such as FIPS, NTLM version, etc. More on how to identify and troubleshoot these in a future post.

Now get out there and start removing all those extra permissions from your service accounts. Hope this helps!

 

Introducing Remote Autopilot Reset in Intune for Education


The Intune for Education team is excited about the recently released Remote Autopilot Reset feature. This new functionality allows your school IT admin to reset devices from the Intune for Education console, hands free.

 


 

Together, Windows Autopilot and Microsoft Intune for Education are helping schools take a modern approach to device provisioning and management in the classroom. The remote reset function is another great example of our focus on simplifying the management of devices in a way that provides more time for teachers to teach and a richer learning experience for students.

 

Traditionally, teachers or school IT admins have had to physically go to each device to initiate a PC reset. The old reset unenrolls the device from management and removes it from the network, meaning the IT admin has to reconfigure the device in order to make it classroom ready. Now with Autopilot Reset, all user data, including user-installed apps and personal settings, is removed, while the device stays enrolled in Intune and connected to Azure AD. This ensures the student's device is kept up to date with all the latest apps, policies, and settings. The Autopilot Reset can be kicked off directly on the device, or remotely from the Intune for Education console.

 

Furthermore, with the new remote option, you can Autopilot Reset a single device, or you can choose to Autopilot Reset all devices in a specific group, such as a classroom. This helps IT admins and teachers quickly wipe and reconfigure students' PCs in bulk to prepare them for a new school year. Learn more about Autopilot Reset here.

 

Microsoft Intune announces device-only subscription for shared resources

The meaning of “devices” has evolved in the modern workplace, with IT expected to support not only corporate PCs and bring-your-own (BYO) devices, but also manage kiosks, shared single-purpose devices, phone-room resources, collaboration devices such as Surface Hub, and even some IoT devices. Microsoft Intune is the most comprehensive unified endpoint management platform to manage and secure this proliferation of endpoints in your organization. We are excited to share a licensing update today that further lowers your total cost of ownership (TCO).
 
Microsoft Intune is pleased to announce a new device-only subscription service that helps organizations manage devices that are not affiliated with specific users. The Intune device SKU is licensed per device per month. 
 
It is worth noting that the device-based subscription does not allow you to take advantage of any user-based security and management features, including but not limited to email and calendaring, conditional access, and app protection policies. The device SKU also cannot be used for shared device scenarios where the device is managed through the user(s) on the device. Shared devices that are not affiliated with any user identity can leverage this license; for example, certain Android Enterprise purpose-built devices and kiosks, as well as Windows kiosks. This license may provide compelling value for devices using enrollment methods such as Windows Autopilot self-deploying mode, Apple Business Manager, or Google zero-touch enrollment, where the devices are not associated with a user and no user-targeted features are required, such as user-based enrollment, the Intune Company Portal, or conditional access.
 
For more information, please contact your Microsoft representative and visit https://www.microsoft.com/en-us/licensing/product-licensing/products 
 
(Updated 12/20 to clarify the self-deploying use-case for Windows Autopilot)

How to win the latest security race over NTLM relay


Detecting ExchangePriv vulnerability with Azure ATP

 

NTLM relay vulnerability is not a new phenomenon. With the added security mechanisms implemented in signed NTLMv2 making successful attacks seem more and more unlikely, it would appear there would be very little to talk about here. Right?

 

Wrong!

 

In fact, there are attack vectors that remain where NTLMv1 or unsigned NTLMv2 is relayed by attackers in the domain environment. In addition, although NTLMv1 and unsigned NTLMv2 should no longer be in use, our most recent research found that NTLMv1 is still commonly used in about 30-40% of environments. These legacy protocols are used by default on servers running old versions of Windows (Windows Vista or Windows Server 2008 and earlier), but can also be seen on newer versions that support backward compatibility, or in processes that implement the authentication mechanism themselves (such as Python modules like “Impacket”). Furthermore, newly discovered vulnerabilities can lead to easy exploitation of domain controllers, even faster than previously thought possible.

 

Signed NTLMv2 has a signing and sealing mechanism that prevents tampering and relay impersonation. The version of NTLM, however, used in each domain depends on the source computer that initiates authentication. The source computer in different domains can be configured differently based on operating system version, LMCompatibilityLevel registry override or Group Policy Object (GPO) configuration. In other words, even if you are running newer versions of Windows and Active Directory servers, you may be running client services that still use NTLMv1 without realizing it, leaving your organization equally exposed.  

 

While new vulnerabilities in NTLM relay have occasionally been revealed, the most recent discovery from a few weeks ago, in which on-premises Exchange Servers can be remotely triggered to authenticate over NTLM in their default configuration, is unique and especially concerning to organizations that still have NTLMv1 in use.

 

Red-teamer Dirk-jan found that three vulnerabilities, when combined, can potentially form a new NTLM relay attack. Dirk-jan's proposed triangle is based on historical vulnerabilities of the NTLM challenge-response authentication method, and is especially relevant when NTLMv1 is in use, or when the less commonly deployed, but equally vulnerable, unsigned or unsealed NTLMv2 is in use.

 

In the proposed attack, Exchange Server can be configured remotely, by a user with an inbox on the Exchange Server, to trigger NTLM authentication with the Exchange Server account credentials to a malicious remote HTTP server. The remote HTTP server waits for the sensitive Exchange Server account and relays its authentication to any other server. Once the Exchange Server account impersonation is targeted at an Active Directory Domain Controller, the elevated permissions of the Exchange Server account can be used to push changes into the directory over protocols such as LDAP or LDAPS.

 

If the attacker succeeds in impersonating the Exchange Server account, they can even grab extended permissions to perform domain replication (“DcSync”) and also acquire credentials of all accounts in the domain.

 

When this new attack scenario was raised, Microsoft's Azure Advanced Threat Protection (Azure ATP) security research team immediately investigated it, confirmed the vulnerability was a real threat, and created a new Azure ATP detection to alert SecOps teams if an attacker is leveraging this exploit. The new Azure ATP NTLM relay alert identifies use of Exchange Server account credentials from a suspicious source, alerts on the suspicious behavior, provides evidence and related entity information, and helps to swiftly remediate.

 

Screenshots from the Azure ATP portal of how the new alert looks when relaying from Linux or Windows machines are shown below. The first alert shows a detected relay that used NTLMv1 or unsigned (and not sealed) NTLMv2 protocol, and the second alert shows a detected relay that used secured NTLMv2 protocol, with suspicious IP address behavior.

 

Figure 1 – Medium severity Azure ATP alert detecting suspicious use of NTLMv1 or unsigned NTLMv2 protocol

Figure 2 – Low severity Azure ATP alert detecting suspicious use of signed or sealed NTLMv2 against non-Exchange servers

 

We strongly recommend forcing the use of NTLMv2 in a domain, via the Network security: LAN Manager authentication level group policy. To learn more about forcing the use of NTLMv2, see how to set the group policy on Domain Controllers or on Windows clients.
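If you want to audit or stage the change outside of Group Policy first, this policy maps to the LmCompatibilityLevel registry value, where 5 corresponds to "Send NTLMv2 response only. Refuse LM & NTLM". A minimal sketch; test carefully before broad rollout, since legacy clients may break:

# Check the current level (the value is absent if never configured)
Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa' -Name 'LmCompatibilityLevel' -ErrorAction SilentlyContinue

# Set level 5: send NTLMv2 response only, refuse LM and NTLM
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa' -Name 'LmCompatibilityLevel' -Value 5 -Type DWord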

 

You can learn more about LDAP best practices for client signing requirements here.

 

Make your organization more secure with Azure ATP by leveraging the scale and intelligence of the Microsoft Intelligent Security Graph as part of Microsoft 365’s E5 Suite.

 

Get Started Today

We’re moving!


After many years on TechNet, Microsoft is retiring the existing blogging platform.  We will soon be migrating over to a Tech Community, which many of you are already familiar with.  We appreciate the many helpful comments and consistent viewership we've seen over the years and hope you'll join us at our new home!

The new site is not yet active, and we'll post an update once it's ready.  New Networking content from the Windows Core Networking team at Microsoft will appear there. For simplicity, you can locate our blog with the following link: https://aka.ms/MSFTNetworkBlog

Please Note:

  1. Existing posts will be retained
  2. Comments will unfortunately not be retained
  3. We’ll plan to release new content (when we have new content) on Wednesdays

Signing off,

Windows Core Networking team

Logical/Standard Switch Deployment Failures in System Center Virtual Machine Manager 2016 (UR 6)


Hi everyone, Chuck Timon here to talk to you about an issue a customer contacted me on recently: Logical Switch deployment failures on two nodes of a four-node Windows Server 2016 Failover Cluster managed by System Center Virtual Machine Manager 2016 (UR6). The error message is shown here –


The decode for the error –


I asked the customer to collect a Carmine trace to capture additional details of the failure.

The initial exception –


The final failure –


As a test, I asked the customer to deploy a Standard Switch, which failed as well. I then asked the customer to verify he could create a simple external Hyper-V switch on the hosts, and that completed successfully.

I was aware of a problem in SCVMM with Logical Switches ‘disappearing’ as documented here – System Center Virtual Machine Manager fails to enumerate and manage Logical switch deployed on the host. The customer reviewed the blog and reported that none of the symptoms applied to his environment. We decided to go ahead and re-compile the .mof files mentioned in the blog anyway. After that action completed successfully, the hosts were refreshed in the SCVMM console. Another attempt was made to deploy a Logical Switch, and it failed in the same way. The cluster node was then rebooted, and after that, the Logical Switch deployment completed successfully. The process was repeated on the other node, and in the end all Logical Switch deployments completed and the cluster was up and running properly. The conclusion I reached was that the WMI service likely needed to be restarted, which, of course, happened as a result of the reboot.
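If you hit the same symptom and want to test the WMI theory before scheduling a full reboot, restarting the WMI service is a lighter-weight first step. A sketch; the -Force switch stops dependent services too, so capture and restart them and expect a brief interruption on the host:

# Remember which dependent services are currently running
$deps = Get-Service -Name Winmgmt -DependentServices | Where-Object Status -eq 'Running'

# Restart WMI (-Force is needed because of the dependents)
Restart-Service -Name Winmgmt -Force

# Bring the dependents back up
$deps | Start-Service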

Thanks for your attention and I hope this information was useful to you.

Charles Timon, Jr.
Senior Premier Field Engineer
Microsoft Corporation

 

Parsing Text with PowerShell (3/3)


This is the third and final post in a three-part series.

  • Part 1:
    • Useful methods on the String class
    • Introduction to Regular Expressions
    • The Select-String cmdlet
  • Part 2:
    • the -split operator
    • the -match operator
    • the switch statement
    • the Regex class
  • Part 3:
    • a real world, complete and slightly bigger, example of a switch-based parser
      • General structure of a switch-based parser
      • The real world example

In the previous posts, we looked at the different operators that are available to us in PowerShell.

When analyzing crashes at DICE, I noticed that some of the C++ runtime binaries were missing debug symbols. They should be available for download from Microsoft’s public symbol server, and most versions were there. However, due to some process errors at DevDiv, some builds were released publicly without available debug symbols.
In some cases, those missing symbols prevented us from debugging those crashes, and in all cases, they triggered my developer OCD.

So, to give actionable feedback to Microsoft, I scripted a debugger (cdb.exe in this case) to give a verbose list of the loaded modules, and parsed the output with PowerShell, which was also later used to group and filter the resulting data set. I sent this data to Microsoft, and 5 days later, the missing symbols were available for download. Mission accomplished!

This post will describe the parser I wrote for this task (it turned out that I had good use for it for other tasks later), and the general structure is applicable to most parsing tasks.

The example will show how a switch-based parser would look when the input data isn’t as tidy as it normally is in examples, but messy – as the real world data often is.

General Structure of a switch Based Parser

Depending on the structure of our input, the code must be organized in slightly different ways.

Input may have a record start that differs by indentation or some distinct token like

Foo                    <- Record start - No whitespace at the beginning of the line
    Prop1=Staffan      <- Properties for the record - starts with whitespace
    Prop3 =ValueN
Bar
    Prop1=Steve
    Prop2=ValueBar2

If the data to be parsed has an explicit start record, it is a bit easier than if it doesn’t have one.
We create a new data object when we get a record start, after writing any previously created object to the pipeline.
At the end, we need to check if we have parsed a record that hasn’t been written to the pipeline.

The general structure of a such a switch-based parser can be as follows:

$inputData = @"
Foo
    Prop1=Value1
    Prop3=Value3
Bar
    Prop1=ValueBar1
    Prop2=ValueBar2
"@ -split 'r?n'   # This regex is useful to split at line endings, with or without carriage return

class SomeDataClass {
    $ID
    $Name
    $Property2
    $Property3
}

# map to project input property names to the properties on our data class
$propertyNameMap = @{
    Prop1 = "Name"
    Prop2 = "Property2"
    Prop3 = "Property3"
}

$currentObject = $null
switch -regex ($inputData) {

    '^(\S.*)' {
        # record start pattern, in this case line that doesn't start with a whitespace.
        if ($null -ne $currentObject) {
            $currentObject                   # output to pipeline if we have a previous data object
        }
        $currentObject = [SomeDataClass] @{  # create new object for this record
            Id = $matches.1                  # with Id like Foo or Bar
        }
        continue
    }

    # set the properties on the data object
    '^\s+([^=]+)=(.*)' {
        $name, $value = $matches[1, 2]
        # project property names
        $propName = $propertyNameMap[$name]
        if ($null -eq $propName) {
            $propName = $name
        }
        # assign the parsed value to the projected property name
        $currentObject.$propName = $value
        continue
    }
}

if ($currentObject) {
    # Handle the last object if any
    $currentObject # output to pipeline
}

ID  Name      Property2 Property3
--  ----      --------- ---------
Foo Value1              Value3
Bar ValueBar1 ValueBar2

Alternatively, we may have input where the records are separated by a blank line, but without any obvious record start.

commitId=1234                         <- In this case, a commitId is first in a record
description=Update readme.md
                                      <- the blank line separates records
user=Staffan                          <- For this record, a user property comes first
commitId=1235
description=Fix bug.md

In this case the structure of the code looks a bit different. We create an object at the beginning, but keep track of whether it's dirty or not.
If we get to the end with a dirty object, we must output it.

$inputData = @"

commit=1234
desc=Update readme.md

user=Staffan
commit=1235
desc=Bug fix

"@ -split "r?n"

class SomeDataClass {
    [int] $CommitId
    [string] $Description
    [string] $User
}

# map to project input property names to the properties on our data class
# we only need to provide the ones that are different. 'User' works fine as it is.
$propertyNameMap = @{
    commit = "CommitId"
    desc   = "Description"
}

$currentObject = [SomeDataClass]::new()
$objectDirty = $false
switch -regex ($inputData) {
    # set the properties on the data object
    '^([^=]+)=(.*)$' {
        # parse a name/value
        $name, $value = $matches[1, 2]
        # project property names
        $propName = $propertyNameMap[$name]
        if ($null -eq $propName) {
            $propName = $name
        }
        # assign the projected property
        $currentObject.$propName = $value
        $objectDirty = $true
        continue
    }

    '^\s*$' {
        # separator pattern, in this case any blank line
        if ($objectDirty) {
            $currentObject                           # output to pipeline
            $currentObject = [SomeDataClass]::new()  # create new object
            $objectDirty = $false                    # and mark it as not dirty
        }
    }
    default {
        Write-Warning "Unexpected input: '$_'"
    }
}

if ($objectDirty) {
    # Handle the last object if any
    $currentObject # output to pipeline
}

CommitId Description      User
-------- -----------      ----
    1234 Update readme.md
    1235 Bug fix          Staffan

The Real World Example

I have adapted this sample slightly so that I get the loaded modules from a running process instead of from my crash dumps. The format of the output from the debugger is the same.
The following command launches a command line debugger on notepad, with a script that gives a verbose listing of the loaded modules, and quits:

# we need to muck around with the console output encoding to handle the trademark chars
# imagine no encodings
# it's easy if you try
# no code pages below us
# above us only sky
[Console]::OutputEncoding = [System.Text.Encoding]::GetEncoding("iso-8859-1")

$proc = Start-Process notepad -passthru
Start-Sleep -seconds 1
$cdbOutput = cdb -y 'srv*c:\symbols*http://msdl.microsoft.com/download/symbols' -c ".reload -f;lmv;q" -p $proc.Id

The output of the command above is here for those who want to follow along but who aren’t running Windows or don’t have cdb.exe installed.

The (abbreviated) output looks like this:

Microsoft (R) Windows Debugger Version 10.0.16299.15 AMD64
Copyright (c) Microsoft Corporation. All rights reserved.

*** wait with pending attach

************* Path validation summary **************
Response                         Time (ms)     Location
Deferred                                       srv*c:\symbols*http://msdl.microsoft.com/download/symbols
Symbol search path is: srv*c:\symbols*http://msdl.microsoft.com/download/symbols
Executable search path is:
ModLoad: 00007ff6`e9da0000 00007ff6`e9de3000   C:\Windows\system32\notepad.exe
...
ModLoad: 00007ffe`97d80000 00007ffe`97db1000   C:\WINDOWS\SYSTEM32\ntmarta.dll
(98bc.40a0): Break instruction exception - code 80000003 (first chance)
ntdll!DbgBreakPoint:
00007ffe`9cd53050 cc              int     3
0:007> cdb: Reading initial command '.reload -f;lmv;q'
Reloading current modules
.....................................................
start             end                 module name
00007ff6`e9da0000 00007ff6`e9de3000   notepad    (pdb symbols)          c:\symbols\notepad.pdb\2352C62CDF448257FDBDDA4081A8F9081\notepad.pdb
    Loaded symbol image file: C:\Windows\system32\notepad.exe
    Image path: C:\Windows\system32\notepad.exe
    Image name: notepad.exe
    Image was built with /Brepro flag.
    Timestamp:        329A7791 (This is a reproducible build file hash, not a timestamp)
    CheckSum:         0004D15F
    ImageSize:        00043000
    File version:     10.0.17763.1
    Product version:  10.0.17763.1
    File flags:       0 (Mask 3F)
    File OS:          40004 NT Win32
    File type:        1.0 App
    File date:        00000000.00000000
    Translations:     0409.04b0
    CompanyName:      Microsoft Corporation
    ProductName:      Microsoft® Windows® Operating System
    InternalName:     Notepad
    OriginalFilename: NOTEPAD.EXE
    ProductVersion:   10.0.17763.1
    FileVersion:      10.0.17763.1 (WinBuild.160101.0800)
    FileDescription:  Notepad
    LegalCopyright:   © Microsoft Corporation. All rights reserved.
...
00007ffe`9ccb0000 00007ffe`9ce9d000   ntdll      (pdb symbols)          c:\symbols\ntdll.pdb\B8AD79538F2730FD9BACE36C9F9316A01\ntdll.pdb
    Loaded symbol image file: C:\WINDOWS\SYSTEM32\ntdll.dll
    Image path: C:\WINDOWS\SYSTEM32\ntdll.dll
    Image name: ntdll.dll
    Image was built with /Brepro flag.
    Timestamp:        E8B54827 (This is a reproducible build file hash, not a timestamp)
    CheckSum:         001F20D1
    ImageSize:        001ED000
    File version:     10.0.17763.194
    Product version:  10.0.17763.194
    File flags:       0 (Mask 3F)
    File OS:          40004 NT Win32
    File type:        2.0 Dll
    File date:        00000000.00000000
    Translations:     0409.04b0
    CompanyName:      Microsoft Corporation
    ProductName:      Microsoft® Windows® Operating System
    InternalName:     ntdll.dll
    OriginalFilename: ntdll.dll
    ProductVersion:   10.0.17763.194
    FileVersion:      10.0.17763.194 (WinBuild.160101.0800)
    FileDescription:  NT Layer DLL
    LegalCopyright:   © Microsoft Corporation. All rights reserved.
quit:

The output starts with info that I’m not interested in here. I only want to get the detailed information about the loaded modules. It is not until the line

start             end                 module name

that I care about the output.

Also, at the end there is a line that we need to be aware of:

quit:

that is not part of the module output.

To skip the parts of the debugger output that we don’t care about, we have a boolean flag initially set to true.
If that flag is set, we check if the current line, $_, is the module header in which case we flip the flag.

$inPreamble = $true
switch -regex ($cdbOutput) {

        { $inPreamble -and $_ -eq "start             end                 module name" } { $inPreamble = $false; continue }

I have made the parser a separate function that reads its input from the pipeline. This way, I can use the same function to parse module data, regardless of how I got it. Maybe it was saved to a file, or came from a dump, or a live process. It doesn’t matter, since the parser is decoupled from the data retrieval.

After the sample, there is a breakdown of the more complicated regular expressions used, so don’t despair if you don’t understand them at first.
Regular Expressions are notoriously hard to read, so much so that they make Perl look readable in comparison.

# define an class to store the data
class ExecutableModule {
    [string]   $Name
    [string]   $Start
    [string]   $End
    [string]   $SymbolStatus
    [string]   $PdbPath
    [bool]     $Reproducible
    [string]   $ImagePath
    [string]   $ImageName
    [DateTime] $TimeStamp
    [uint32]   $FileHash
    [uint32]   $CheckSum
    [uint32]   $ImageSize
    [version]  $FileVersion
    [version]  $ProductVersion
    [string]   $FileFlags
    [string]   $FileOS
    [string]   $FileType
    [string]   $FileDate
    [string[]] $Translations
    [string]   $CompanyName
    [string]   $ProductName
    [string]   $InternalName
    [string]   $OriginalFilename
    [string]   $ProductVersionStr
    [string]   $FileVersionStr
    [string]   $FileDescription
    [string]   $LegalCopyright
    [string]   $LegalTrademarks
    [string]   $LoadedImageFile
    [string]   $PrivateBuild
    [string]   $Comments
}

<#
.SYNOPSIS
    Runs a debugger on a program to dump its loaded modules
#>
function Get-ExecutableModuleRawData {
    param ([string] $Program)
    $consoleEncoding = [Console]::OutputEncoding
    [Console]::OutputEncoding = [System.Text.Encoding]::GetEncoding("iso-8859-1")
    try {
        $proc = Start-Process $program -PassThru
        Start-Sleep -Seconds 1  # sleep for a while so modules are loaded
        cdb -y srv*c:\symbols*http://msdl.microsoft.com/download/symbols -c ".reload -f;lmv;q" -p $proc.Id
        $proc.Close()
    }
    finally {
        [Console]::OutputEncoding = $consoleEncoding
    }
}

<#
.SYNOPSIS
    Converts verbose module data from windows debuggers into ExecutableModule objects.
#>
function ConvertTo-ExecutableModule {
    [OutputType([ExecutableModule])]
    param (
        [Parameter(ValueFromPipeline)]
        [string[]] $ModuleRawData
    )
    begin {
        $currentObject = $null
        $preamble = $true
        $propertyNameMap = @{
            'File flags'      = 'FileFlags'
            'File OS'         = 'FileOS'
            'File type'       = 'FileType'
            'File date'       = 'FileDate'
            'File version'    = 'FileVersion'
            'Product version' = 'ProductVersion'
            'Image path'      = 'ImagePath'
            'Image name'      = 'ImageName'
            'FileVersion'     = 'FileVersionStr'
            'ProductVersion'  = 'ProductVersionStr'
        }
    }
    process {
        switch -regex ($ModuleRawData) {

            # skip lines until we get to our sentinel line
            { $preamble -and $_ -eq "start             end                 module name" } { $preamble = $false; continue }

            #00007ff6`e9da0000 00007ff6`e9de3000   notepad    (deferred)
            #00007ffe`9ccb0000 00007ffe`9ce9d000   ntdll      (pdb symbols)          c:\symbols\ntdll.pdb\B8AD79538F2730FD9BACE36C9F9316A01\ntdll.pdb
            '^([0-9a-f`]{17})\s([0-9a-f`]{17})\s+(\S+)\s+\(([^)]+)\)\s*(.+)?' {
                # see breakdown of the expression later in the post
                # on record start, output the currentObject, if any is set
                if ($null -ne $currentObject) {
                    $currentObject
                }
                $start, $end, $module, $pdbKind, $pdbPath = $matches[1..5]
                # create an instance of the object that we are adding info from the current record into.
                $currentObject = [ExecutableModule] @{
                    Start        = $start
                    End          = $end
                    Name         = $module
                    SymbolStatus = $pdbKind
                    PdbPath      = $pdbPath
                }
                continue
            }
            '^\s+Image was built with /Brepro flag\.' {
                $currentObject.Reproducible = $true
                continue
            }
            '^\s+Timestamp:\s+[^(]+\((?<timestamp>.{8})\)' {
                # see breakdown of the regular expression later in the post
                # Timestamp:        Mon Jan  7 23:42:30 2019 (5C33D5D6)
                $intValue = [Convert]::ToInt32($matches.timestamp, 16)
                $currentObject.TimeStamp = [DateTime]::new(1970, 01, 01, 0, 0, 0, [DateTimeKind]::Utc).AddSeconds($intValue)
                continue
            }
            '^\s+TimeStamp:\s+(?<value>.{8}) \(This' {
                # Timestamp:        E78937AC (This is a reproducible build file hash, not a timestamp)
                $currentObject.FileHash = [Convert]::ToUInt32($matches.value, 16)
                continue
            }
            '^\s+Loaded symbol image file: (?<imageFile>[^)]+)' {
                $currentObject.LoadedImageFile = $matches.imageFile
                continue
            }
            '^\s+Checksum:\s+(?<checksum>\S+)' {
                $currentObject.Checksum = [Convert]::ToUInt32($matches.checksum, 16)
                continue
            }
            '^\s+Translations:\s+(?<value>\S+)' {
                $currentObject.Translations = $matches.value.Split(".")
                continue
            }
            '^\s+ImageSize:\s+(?<imageSize>.{8})' {
                $currentObject.ImageSize = [Convert]::ToUInt32($matches.imageSize, 16)
                continue
            }
            '^s{4}(?<name>[^:]+):s+(?<value>.+)' {
                # see breakdown of the regular expression later in the post
                # This part is any 'name: value' pattern
                $name, $value = $matches['name', 'value']

                # project the property name
                $propName = $propertyNameMap[$name]
                $propName = if ($null -eq $propName) { $name } else { $propName }

                # note the dynamic property name in the assignment
                # this will fail if the property doesn't have a member with the specified name
                $currentObject.$propName = $value
                continue
            }
            'quit:' {
                # ignore and exit
                break
            }
            default {
                # When writing the parser, it can be useful to include a line like the one below to see the cases that are not handled by the parser
                # Write-Warning "missing case for '$_'. Unexpected output format from cdb.exe"

                continue # skip lines that don't match the patterns we are interested in, like the start/end/module name header and the quit: output
            }
        }
    }
    end {
        # this is needed to output the last object
        if ($null -ne $currentObject) {
            $currentObject
        }
    }
}


Get-ExecutableModuleRawData Notepad |
    ConvertTo-ExecutableModule |
    Sort-Object ProductVersion, Name |
    Format-Table -Property Name, FileVersionStr, ProductVersion, FileDescription

Name               FileVersionStr                             ProductVersion FileDescription
----               --------------                             -------------- ---------------
PROPSYS            7.0.17763.1 (WinBuild.160101.0800)         7.0.17763.1    Microsoft Property System
ADVAPI32           10.0.17763.1 (WinBuild.160101.0800)        10.0.17763.1   Advanced Windows 32 Base API
bcrypt             10.0.17763.1 (WinBuild.160101.0800)        10.0.17763.1   Windows Cryptographic Primitives Library
...
uxtheme            10.0.17763.1 (WinBuild.160101.0800)        10.0.17763.1   Microsoft UxTheme Library
win32u             10.0.17763.1 (WinBuild.160101.0800)        10.0.17763.1   Win32u
WINSPOOL           10.0.17763.1 (WinBuild.160101.0800)        10.0.17763.1   Windows Spooler Driver
KERNELBASE         10.0.17763.134 (WinBuild.160101.0800)      10.0.17763.134 Windows NT BASE API Client DLL
wintypes           10.0.17763.134 (WinBuild.160101.0800)      10.0.17763.134 Windows Base Types DLL
SHELL32            10.0.17763.168 (WinBuild.160101.0800)      10.0.17763.168 Windows Shell Common Dll
...
windows_storage    10.0.17763.168 (WinBuild.160101.0800)      10.0.17763.168 Microsoft WinRT Storage API
CoreMessaging      10.0.17763.194                             10.0.17763.194 Microsoft CoreMessaging Dll
gdi32full          10.0.17763.194 (WinBuild.160101.0800)      10.0.17763.194 GDI Client DLL
ntdll              10.0.17763.194 (WinBuild.160101.0800)      10.0.17763.194 NT Layer DLL
RMCLIENT           10.0.17763.194 (WinBuild.160101.0800)      10.0.17763.194 Resource Manager Client
RPCRT4             10.0.17763.194 (WinBuild.160101.0800)      10.0.17763.194 Remote Procedure Call Runtime
combase            10.0.17763.253 (WinBuild.160101.0800)      10.0.17763.253 Microsoft COM for Windows
COMCTL32           6.10 (WinBuild.160101.0800)                10.0.17763.253 User Experience Controls Library
urlmon             11.00.17763.168 (WinBuild.160101.0800)     11.0.17763.168 OLE32 Extensions for Win32
iertutil           11.00.17763.253 (WinBuild.160101.0800)     11.0.17763.253 Run time utility for Internet Explorer

Regex pattern breakdown

Here is a breakdown of the more complicated patterns, using the ignore pattern whitespace modifier x:

([0-9a-f`]{17})\s([0-9a-f`]{17})\s+(\S+)\s+\(([^)]+)\)\s*(.+)?

# example input: 00007ffe`9ccb0000 00007ffe`9ce9d000   ntdll      (pdb symbols)          c:\symbols\ntdll.pdb\B8AD79538F2730FD9BACE36C9F9316A01\ntdll.pdb

(?x)                # ignore pattern whitespace
^                   # the beginning of the line
([0-9a-f`]{17})     # capture expression like 00007ff6`e9da0000 - any hex number or backtick, and exactly 17 of them
\s                  # a space
([0-9a-f`]{17})     # capture expression like 00007ff6`e9da0000 - any hex number or backtick, and exactly 17 of them
\s+                 # skip any number of spaces
(\S+)               # capture until we get a space - this would match the 'ntdll' part
\s+                 # skip one or more spaces
\(                  # a literal open parenthesis
    ([^)]+)         # capture anything but a close parenthesis
\)                  # a literal close parenthesis
\s*                 # skip zero or more spaces
(.+)?               # optionally capture any symbol file path

Breakdown of the name-value pattern:

^s+(?<name>[^:]+):s+(?<value>.+)

# example input:  File flags:       0 (Mask 3F)

(?x)                # ignore pattern whitespace
^                   # the beginning of the line
s+                 # require one or more spaces
(?<name>[^:]+)      # capture anything that is not a `:` into the named group "name"
:                   # require a comma
s+                 # require one or more spaces
(?<value>.+)        # capture everything until the end into the name group "value"

Breakdown of the timestamp pattern:

^\s+Timestamp:\s+[^(]+\((?<timestamp>.{8})\)

#example input:     Timestamp:        Mon Jan  7 23:42:30 2019 (5C33D5D6)

(?x)                # ignore pattern whitespace
^                   # the beginning of the line
\s+                 # require one or more spaces
Timestamp:          # the literal text 'Timestamp:'
\s+                 # require one or more spaces
[^(]+               # one or more of anything but an open parenthesis
\(                  # a literal '('
(?<timestamp>.{8})  # 8 characters of anything, captured into the group 'timestamp'
\)                  # a literal ')'

Gotchas – the Regex Cache

Something that can happen if you are writing a more complicated parser is the following:
The parser works well. You have 15 regular expressions in your switch statement and then you get some input you haven’t seen before, so you add a 16th regex.
All of a sudden, the performance of your parser tanks. WTF?

The .net regex implementation has a cache of recently used regexs. You can check the size of it like this:

PS> [regex]::CacheSize
15

# bump it
[regex]::CacheSize = 20

And now your parser is fast(er) again.

Bonus tip

I frequently use PowerShell to write (generate) my code:

Get-ExecutableModuleRawData pwsh |
    Select-String '^\s+([^:]+):' |       # this pattern matches the module detail fields
    Foreach-Object {$_.matches.groups[1].value} |
    Select-Object -Unique |
    Foreach-Object -Begin   { "class ExecutableModuleData {" }`
                   -Process { "    [string] $" + ($_ -replace "\s.", {[char]::ToUpperInvariant($_.Groups[0].Value[1])}) }`
                   -End     { "}" }

The output is

class ExecutableModuleData {
    [string] $LoadedSymbolImageFile
    [string] $ImagePath
    [string] $ImageName
    [string] $Timestamp
    [string] $CheckSum
    [string] $ImageSize
    [string] $FileVersion
    [string] $ProductVersion
    [string] $FileFlags
    [string] $FileOS
    [string] $FileType
    [string] $FileDate
    [string] $Translations
    [string] $CompanyName
    [string] $ProductName
    [string] $InternalName
    [string] $OriginalFilename
    [string] $ProductVersion
    [string] $FileVersion
    [string] $FileDescription
    [string] $LegalCopyright
    [string] $Comments
    [string] $LegalTrademarks
    [string] $PrivateBuild
}

It is not complete – I don’t have the fields from the record start, some types are incorrect, and when run against some other executables a few other fields may appear.
But it is a very good starting point. And way more fun than typing it 🙂

Note that this example is using a new feature of the -replace operator – to use a ScriptBlock to determine what to replace with – that was added in PowerShell Core 6.1.
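As a standalone illustration, here is a minimal example of the ScriptBlock form of -replace: it matches a whitespace character plus the character that follows, and replaces the pair with the upper-cased second character, the same projection used in the generator above:

PS> "file version" -replace '\s.', { [char]::ToUpperInvariant($_.Groups[0].Value[1]) }
fileVersion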

Bonus tip #2

A regular expression construct that I often find useful is non-greedy matching.
The example below shows the effect of the ? modifier, which can be used after * (zero or more) and + (one or more).

# greedy matching - match to the last occurrence of the following character (>)
if("<Tag>Text</Tag>" -match '<(.+)>') { $matches }

Name                           Value
----                           -----
1                              Tag>Text</Tag
0                              <Tag>Text</Tag>

# non-greedy matching - match to the first occurrence of the following character (>)
if("<Tag>Text</Tag>" -match '<(.+?)>') { $matches }

Name                           Value
----                           -----
1                              Tag
0                              <Tag>

See Regex Repeat for more info on how to control pattern repetition.

Summary

In this post, we have looked at how a switch-based parser can be structured, and how it can be written so that it works as part of the pipeline.
We have also looked at a few slightly more complicated regular expressions in some detail.

As we have seen, PowerShell has a plethora of options for parsing text, and most of them revolve around regular expressions.
My personal experience has been that the time I’ve invested in understanding the regex language was well invested.

Hopefully, this gives you a good start with the parsing tasks you have at hand.

Thanks to Jason Shirk, Mathias Jessen and Steve Lee for reviews and feedback.

Staffan Gustafsson, @StaffanGson, github

Staffan works at DICE in Stockholm, Sweden, as a Software Engineer and has been using PowerShell since the first public beta.
He was most seriously pleased when PowerShell was open sourced, and has since contributed bug fixes, new features and performance improvements.
Staffan is a speaker at PSConfEU and is always happy to talk PowerShell.

The post Parsing Text with PowerShell (3/3) appeared first on PowerShell.

DSC Resource Kit Release February 2019


We just released the DSC Resource Kit!

This release includes updates to 14 DSC resource modules. In the past 6 weeks, 126 pull requests have been merged and 102 issues have been closed, all thanks to our amazing community!

The modules updated in this release are:

  • ActiveDirectoryCSDsc
  • CertificateDsc
  • ComputerManagementDsc
  • DFSDsc
  • NetworkingDsc
  • PSDscResources
  • SharePointDsc
  • SqlServerDsc
  • StorageDsc
  • xActiveDirectory
  • xExchange
  • xHyper-V
  • xPSDesiredStateConfiguration
  • xWebAdministration

For a detailed list of the resource modules and fixes in this release, see the Included in this Release section below.

Our latest community call for the DSC Resource Kit was last Wednesday, February 13. We were not able to record the call this time, apologies. We will fix this for the next call. You can join us for the next call at 12PM (Pacific time) on March 27 to ask questions and give feedback about your experience with the DSC Resource Kit.

The next DSC Resource Kit release will be on Wednesday, April 3.

We strongly encourage you to update to the newest version of all modules using the PowerShell Gallery, and don’t forget to give us your feedback in the comments below, on GitHub, or on Twitter (@PowerShell_Team)!

Please see our documentation here for information on the support of these resource modules.

Included in this Release

You can see a detailed summary of all changes included in this release in the table below. For past release notes, go to the README.md or CHANGELOG.md file on the GitHub repository page for a specific module (see the How to Find DSC Resource Modules on GitHub section below for details on finding the GitHub page for a specific module).

Module Name Version Release Notes
ActiveDirectoryCSDsc 3.2.0.0
  • Added “DscResourcesToExport” to manifest to improve information in PowerShell Gallery – fixes Issue 68.
  • Removed unused CAType variables and references in AdcsOnlineResponder – fixes issue 52.
  • Updated Examples to enable publishing to PowerShell Gallery – fixes issue 54.
  • Cleaned up property alignment in module manifest file.
  • Added new resource AdcsOcspExtension – see Issue 70.
    • Added new ActiveDirectoryCSDsc.CommonHelper.psm1 helper module and unit test.
    • Added stub function to /Tests/TestHelpers (ADCSStub.psm1) so Pester tests can run without having to install ADCSAdministration module.
  • Converted module to auto-documentation Wiki – fixes Issue 53.
  • Enabled Example publishing to PSGallery.
  • Moved change log to CHANGELOG.MD.
  • Opted into Common Tests “Validate Example Files To Be Published”, “Validate Markdown Links” and “Relative Path Length”.
  • Correct AppVeyor Invoke-AppveyorAfterTestTask – fixes Issue 73.
CertificateDsc 4.4.0.0
  • Minor style corrections from PR for Issue 161 that were missed.
  • Opt-in to Example publishing to PowerShell Gallery – fixes Issue 177.
  • Changed Test-CertificateAuthority to return the template name if it finds the display name of the template in the certificate – fixes Issue 147.
ComputerManagementDsc 6.2.0.0
  • WindowsEventLog:
    • Migrated the xWinEventLog from xWinEventLog and renamed to WindowsEventLog.
    • Moved strings in localization file.
    • LogMode is now set with Limit-EventLog.
    • Fixes Issue 18.
DFSDsc 4.3.0.0
  • Fixes PSSA style violation issues – fixes Issue 84.
  • Added “DscResourcesToExport” to manifest to improve information in PowerShell Gallery – fixes Issue 86.
  • Set FunctionsToExport, CmdletsToExport, VariablesToExport, AliasesToExport to empty list in manifest to meet best practice.
  • Explicitly removed extra hidden files from release package
NetworkingDsc 7.0.0.0
  • Refactored module folder structure to move resource to root folder of repository and remove test harness – fixes Issue 372.
  • Removed module conflict tests because only required for harness style modules.
  • Opted into Common Tests “Validate Example Files To Be Published”, “Validate Markdown Links” and “Relative Path Length”.
  • Added “DscResourcesToExport” to manifest to improve information in PowerShell Gallery and removed wildcards from “FunctionsToExport”, “CmdletsToExport”, “VariablesToExport” and “AliasesToExport” – fixes Issue 376.
  • MSFT_NetIPInterface:
    • Added Dhcp, WeakHostReceive and WeakHostSend parameters so that MSFT_DHCPClient, MSFT_WeakHostReceive, MSFT_WeakHostSend can be deprecated – fixes Issue 360.
  • MSFT_DhcpClient:
    • BREAKING CHANGE: Resource has been deprecated and replaced by Dhcp parameter in MSFT_NetIPInterface.
  • MSFT_WeakHostReceive:
    • BREAKING CHANGE: Resource has been deprecated and replaced by WeakHostReceive parameter in MSFT_NetIPInterface.
  • MSFT_WeakHostSend:
    • BREAKING CHANGE: Resource has been deprecated and replaced by WeakHostSend parameter in MSFT_NetIPInterface.
  • MSFT_IPAddress:
    • Updated examples to use NetIPInterface.
  • MSFT_NetAdapterName:
    • Updated examples to use NetIPInterface.
  • MSFT_DnsServerAddress:
    • Updated examples to use NetIPInterface.
  • MSFT_NetworkTeam:
    • Change Get-TargetResource to return actual TeamMembers if network team exists and “Ensure” returns “Present” even when actual TeamMembers do not match “TeamMembers” parameter – fixes Issue 342.
  • Updated examples to format required for publishing to PowerShell Gallery – fixes Issue 374.
  • MSFT_NetAdapterAdvancedProperty:
    • Fixed NetworkAdapterName being returned in the Name property when calling Get-TargetResource – fixes Issue 370.
PSDscResources 2.10.0.0
  • Fixed CompanyName typo – Fixes Issue 100
  • Update LICENSE file to match the Microsoft Open Source Team standard – Fixes Issue 120.
  • Update CommonResourceHelper unit tests to meet Pester 4.0.0 standards (issue 129).
  • Update ResourceHelper unit tests to meet Pester 4.0.0 standards (issue 129).
  • Ported fixes from xPSDesiredStateConfiguration:
    • xArchive
      • Fix end-to-end tests.
      • Update integration tests to meet Pester 4.0.0 standards.
      • Update end-to-end tests to meet Pester 4.0.0 standards.
      • Update unit and integration tests to meet Pester 4.0.0 standards.
      • Wrapped all path and identifier strings in verbose messages with quotes to make it easier to identify the limit of the string when debugging.
      • Refactored date/time checksum code to improve testability and ensure tests can run on machines with localized datetime formats that are not US.
      • Fix “Get-ArchiveEntryLastWriteTime” to return [datetime].
      • Improved verbose logging to make debugging path issues easier.
  • Added .gitattributes file to ensure CRLF settings are configured correctly for the repository.
  • Updated “.vscode\settings.json” to refer to AnalyzerSettings.psd1 so that custom syntax problems are highlighted in Visual Studio Code.
  • Fixed style guideline violations in CommonResourceHelper.psm1.
  • Updated “appveyor.yml” to meet more recent standards.
  • Removed OS image version from “appveyor.yml” to use default image (Issue 127).
  • Removed code to install WMF5.1 from “appveyor.yml” because it is already installed in AppVeyor images (Issue 128).
  • Removed .vscode from .gitignore so that Visual Studio code environment settings can be committed.
  • Environment
    • Update tests to meet Pester 4.0.0 standards (issue 129).
  • Group
    • Update tests to meet Pester 4.0.0 standards (issue 129).
    • Fix unit tests to run on Nano Server.
    • Refactored unit tests to include Context fixtures and change functions to Describe fixtures.
  • GroupSet
    • Update tests to meet Pester 4.0.0 standards (issue 129).
SharePointDsc 3.2.0.0
  • Changes to SharePointDsc unit testing
    • Implemented Strict Mode version 1 for all code run during unit tests.
    • Changed InstallAccount into PSDscRunAsCredential parameter in examples
  • SPAuthenticationRealm
    • New resource for setting farm authentication realm
  • SPConfigWizard
    • Updated PSConfig parameters according to the recommendations in the blog post by Stefan Gossner
  • SPDistributedCacheService
    • Fixed exception on Stop-SPServiceInstance with SharePoint 2019
  • SPFarm
    • Improved logging
    • Added ability to manage the Developer Dashboard settings
  • SPFarmSolution
    • Fixed issue where uninstalling a solution would not work as expected if it contained web application resources.
  • SPIncomingEmailSettings
    • New resource for configuring incoming email settings
  • SPInstallPrereqs
    • Improved logging
    • Corrected detection for Windows Server 2019
    • Corrected support for Windows Server 2019 for SharePoint 2016
  • SPProductUpgrade
    • Fixed issue where upgrading SP2013 would not properly detect the installed version
    • Fixed issue where the localized SharePoint 2019 CU was detected as a Service Pack for a Language Pack
  • SPSearchAuthoritativePage
    • Fixed issue where modifying search query would not target the correct search application
  • SPSearchResultSource
    • Updated resource to allow localized ProviderTypes
  • SPServiceAppSecurity
    • Updated resource to allow localized permission levels
  • SPServiceInstance
    • Added -All switch to resolve ‘Unable to locate service application’ in SP2013
  • SPSite
    • Improved logging
  • SPUserProfileProperty
    • Fixed issue where user profile property mappings did not work
  • SPUserProfileServiceApp
    • Added warning message when MySiteHostLocation is not specified. It is currently not required, which results in an error. This will be corrected in SPDsc v4.0 (a breaking change).
  • SPUserProfileSyncConnection
    • Fixed issue where the test resource would never return true for any configuration on SharePoint 2016/2019
    • Fixed issue where updating an existing connection would never work for any configuration on SharePoint 2016/2019
    • Updated documentation to reflect that Force will not impact configurations for SharePoint 2016/2019. Updated the test method accordingly.
  • SPUserProfileSyncService
    • Fixed issue where failure to configure the sync service would not throw an error
  • SPWebAppPeoplePickerSettings
    • Converted password for access account to secure string. Previously the resource would fail setting the password, and an exception was thrown that printed the password in clear text.
  • SPWebAppPolicy
    • Fixed issue where parameter MembersToExclude did not work as expected
  • SPWorkflowService
    • Added support for specifying scope name.
    • Added support for detecting incorrect configuration for scope name and WorkflowHostUri
SqlServerDsc 12.3.0.0
  • Changes to SqlServerDsc
    • Reverted the change that was made as part of issue 1260 in the previous release, as it only mitigated the issue and did not solve it.
    • Removed the container testing since it broke the integration tests, possibly due to using an excessive amount of memory on the AppVeyor build worker. This makes the unit tests take a bit longer to run (issue 1260).
    • The unit tests and the integration tests are now run in two separate build workers in AppVeyor. One build worker runs the integration tests, while a second build worker runs the unit tests. The build workers run in parallel on paid accounts, but sequentially on free accounts (issue 1260).
    • Cleaned up error handling in some of the integration tests that was part of a workaround for a bug in Pester. The bug is resolved, and the error handling is now again built into Pester.
    • Sped up the AppVeyor tests by splitting the common tests into a separate build job.
    • Updated the appveyor.yml to have the correct build step, and to run the build step only in one of the jobs.
    • Update integration tests to use the new integration test template.
    • Added SqlAgentOperator resource.
  • Changes to SqlServiceAccount
    • Fixed Get-ServiceObject when searching for the Integration Services service. Unlike the rest of the SQL Server services, the Integration Services service cannot be instanced; however, you can have multiple versions installed. Get-ServiceObject would return the service name that you are looking for, but with the version number appended at the end. Added parameter VersionNumber so the search returns the correct service name.
    • Added code to allow for using Managed Service Accounts.
    • Now the correct service type string value is returned by the function Get-TargetResource. Previously one value was passed in as a parameter (e.g. DatabaseEngine), but a different string value was returned (e.g. SqlServer). Now Get-TargetResource returns the same values that can be passed as values in the parameter ServiceType (issue 981).
  • Changes to SqlServerLogin
    • Fixed issue in Test-TargetResource to validate passwords on disabled accounts (issue 915).
    • Now when adding a login of type SqlLogin, and the SQL Server login mode is set to "Integrated", an error is correctly thrown (issue 1179).
  • Changes to SqlSetup
    • Updated the integration test to stop the named instance while installing the other instances to mitigate issue 1260.
    • Add parameters to configure the Tempdb files during the installation of the instance. The new parameters are SqlTempdbFileCount, SqlTempdbFileSize, SqlTempdbFileGrowth, SqlTempdbLogFileSize and SqlTempdbLogFileGrowth (issue 1167).
  • Changes to SqlServerEndpoint
StorageDsc 4.5.0.0
  • Opt-in to Example publishing to PowerShell Gallery – fixes Issue 186.
  • DiskAccessPath:
    • Updated the resource to not assign a drive letter by default when adding a disk access path, by adding a Set-Partition -NoDefaultDriveLetter $NoDefaultDriveLetter block that defaults to true. When adding access paths, the disks will no longer have drive letters automatically assigned on the next reboot, which is the desired behavior – fixes Issue 145.
xActiveDirectory 2.24.0.0
  • Added parameter to xADDomainController to support InstallationMediaPath (issue 108).
  • Updated xADDomainController schema to be standard and provide Descriptions.
xExchange 1.27.0.0
  • Added additional parameters to the MSFT_xExchTransportService resource
  • Added additional parameters to the MSFT_xExchEcpVirtualDirectory resource
  • Added additional unit tests to the MSFT_xExchAutodiscoverVirtualDirectory resource
  • Added additional parameters to the MSFT_xExchExchangeCertificate resource
  • MSFT_xExchMailboxDatabase: Fixes issue with DataMoveReplicationConstraint parameter (issue 401)
  • Added additional parameters and comment based help to the MSFT_xExchMailboxDatabase resource
  • Move code that sets $global:DSCMachineStatus into a dedicated helper function. Issue 407
  • Add missing parameters for xExchMailboxDatabaseCopy, adds comment based help, and adds remaining Unit tests.
xHyper-V 3.16.0.0
  • MSFT_xVMHyperV:
    • Moved localization string data to own file.
    • Fixed code styling issues.
    • Fixed bug where StartupMemory was not evaluated in Test-TargetResource.
    • Redo of abandoned PRs:
    • Fixed issue where Get throws an error when NetworkAdapters are not attached or are missing properties.
xPSDesiredStateConfiguration 8.5.0.0
  • Pull server module publishing
    • Removed forced verbose logging from CreateZipFromSource, Publish-DSCModulesAndMof and Publish-MOFToPullServer as it polluted the console
  • Corrected GitHub Pull Request template to remove referral to BestPractices.MD which has been combined into StyleGuidelines.md (issue 520).
  • xWindowsOptionalFeature
    • Suppress useless verbose output from Import-Module cmdlet. (issue 453).
  • Changes to xRemoteFile
    • Corrected a resource name in the example xRemoteFile_DownloadFileConfig.ps1
  • Fix MSFT_xDSCWebService to find Microsoft.Powershell.DesiredStateConfiguration.Service.Resources.dll when server is configured with pt-BR Locales (issue 284).
  • Changes to xDSCWebService
    • Fixed an issue which prevented the removal of the IIS Application Pool created during deployment of a DSC Pull Server instance (issue 464)
    • Fixed an issue where a Pull Server cannot be deployed on a machine when IIS Express is installed alongside a full-blown IIS (issue 191)
  • Update CommonResourceHelper unit tests to meet Pester 4.0.0 standards (issue 473).
  • Update ResourceHelper unit tests to meet Pester 4.0.0 standards (issue 473).
  • Update MSFT_xDSCWebService unit tests to meet Pester 4.0.0 standards (issue 473).
  • Update MSFT_xDSCWebService integration tests to meet Pester 4.0.0 standards (issue 473).
  • Refactored MSFT_xDSCWebService integration tests to meet current standards and to use Pester TestDrive.
  • xArchive
    • Fix end-to-end tests (issue 457).
    • Update integration tests to meet Pester 4.0.0 standards.
    • Update end-to-end tests to meet Pester 4.0.0 standards.
    • Update unit and integration tests to meet Pester 4.0.0 standards.
    • Wrapped all path and identifier strings in verbose messages with quotes to make it easier to identify the limit of the string when debugging.
    • Refactored date/time checksum code to improve testability and ensure tests can run on machines with localized datetime formats that are not US.
    • Fix “Get-ArchiveEntryLastWriteTime” to return [datetime] (issue 471).
    • Improved verbose logging to make debugging path issues easier.
    • Added handling for “/” as a path separator by backporting code from PSDscResources (issue 469).
    • Copied unit tests from PSDscResources.
    • Added .gitattributes file and removed git configuration from AppVeyor to ensure CRLF settings are configured correctly for the repository.
  • Updated “.vscode\settings.json” to refer to AnalyzerSettings.psd1 so that custom syntax problems are highlighted in Visual Studio Code.
  • Fixed style guideline violations in CommonResourceHelper.psm1.
  • Changes to xService
    • Fixes issue where Get-TargetResource or Test-TargetResource will throw an exception if the target service is configured with a non-existent dependency.
    • Refactored Get-TargetResource Unit tests.
  • Changes to xPackage
    • Fixes an issue where incorrect verbose output was displayed if a product was found (issue 446)
  • Fixes files which are getting triggered for re-encoding after recent pull request (possibly 472).
  • Moves version and change history from README.MD to new file, CHANGELOG.MD.
  • Fixes markdown issues in README.MD and HighQualityResourceModulePlan.md.
  • Opted in to “Common Tests – Validate Markdown Files”
  • Changes to xPSDesiredStateConfiguration
    • In AppVeyor CI the tests are split into three separate jobs, and also run tests on two different build worker images (Windows Server 2012R2 and Windows Server 2016). The common tests are only run on the Windows Server 2016 build worker image. Helps with issue 477.
  • xGroup
    • Corrected style guideline violations. (issue 485)
  • xWindowsProcess
    • Corrected style guideline violations. (issue 496)
  • Changes to PSWSIISEndpoint.psm1
    • Fixes most PSScriptAnalyzer issues.
  • Changes to xRegistry
    • Fixed an issue that fails to remove reg key when the Key is specified as common registry path. (issue 444)
  • Changes to xService
    • Added support for Group Managed Service Accounts
  • Adds new Integration tests for MSFT_xDSCWebService and removes old Integration test file, MSFT_xDSCWebService.xxx.ps1.
  • xRegistry
    • Corrected style guideline violations. (issue 489)
  • Fix script analyzer issues in UseSecurityBestPractices.psm1. issue 483
  • Fixes script analyzer issues in xEnvironmentResource. issue 484
  • Fixes script analyzer issues in MSFT_xMsiPackage.psm1. issue 486
  • Fixes script analyzer issues in MSFT_xPackageResource.psm1. issue 487
  • Adds spaces between variable types and variables, and changes Type Accelerators to Fully Qualified Type Names on affected code.
  • Fixes script analyzer issues in MSFT_xPSSessionConfiguration.psm1 and convert Type Accelerators to Fully Qualified Type Names issue 488.
  • Adds spaces between array members.
  • Fixes script analyzer issues in MSFT_xRemoteFile.psm1 and correct general style violations. (issue 490)
  • Remove unnecessary whitespace from line endings.
  • Add statement to README.md regarding the lack of testing of this module with PowerShell 4 issue 522.
  • Fixes script analyzer issues in MSFT_xWindowsOptionalFeature.psm1 and corrects general style violations (issue 494)
  • Fixes script analyzer issues in MSFT_xRemoteFile.psm1 missed from issue 490.
  • Fix script analyzer issues in MSFT_xWindowsFeature.psm1. issue 493
  • Fix script analyzer issues in MSFT_xUserResource.psm1. issue 492
  • Moves calls to set $global:DSCMachineStatus = 1 into a helper function to reduce the number of locations where we need to suppress PSScriptAnalyzer rules PSAvoidGlobalVars and PSUseDeclaredVarsMoreThanAssignments.
  • Adds spaces between comment hashtags and comments.
  • Fixes script analyzer issues in MSFT_xServiceResource.psm1. issue 491
  • Fixes script analyzer issues in MSFT_xWindowsPackageCab.psm1. issue 495
  • xFileUpload:
    • Fixes script analyzer issues in xFileUpload.schema.psm1. issue 497
    • Update to meet style guidelines.
    • Added Integration tests.
    • Updated manifest Author, Company and Copyright to match standards.
  • Updated module manifest Copyright to match standards and remove year.
  • Auto-formatted the module manifest to improve layout.
  • Fix Run-On Words in README.md.
  • Changes to xPackage
    • Fixes a misnamed variable that causes an error during error message output (issue 449)
  • Fixes script analyzer issues in MSFT_xPSSessionConfiguration.psm1. issue 566
  • Fixes script analyzer issues in xGroupSet.schema.psm1. issue 498
  • Fixes script analyzer issues in xProcessSet.schema.psm1. issue 499
  • Fixes script analyzer issues in xServiceSet.schema.psm1. issue 500
  • Fixes script analyzer issues in xWindowsFeatureSet.schema.psm1. issue 501
  • Fixes script analyzer issues in xWindowsOptionalFeatureSet.schema.psm1 issue 502
  • Updates Should statements in Pester tests to use dashes before parameters.
  • Added a CODE_OF_CONDUCT.md with the same content as in the README.md issue 562
  • Replaces Type Accelerators with fully qualified type names.
xWebAdministration 2.5.0.0
  • Added SiteId to xWebSite to address issue 396
  • xWebSite: Full path is used to get list of default documents
  • xIISLogging: Added support for LogTargetW3C
  • xWebsite: Added support for LogTargetW3C

How to Find Released DSC Resource Modules

To see a list of all released DSC Resource Kit modules, go to the PowerShell Gallery and display all modules tagged as DSCResourceKit. You can also enter a module’s name in the search box in the upper right corner of the PowerShell Gallery to find a specific module.

Of course, you can also always use PowerShellGet (available starting in WMF 5.0) to find modules with DSC Resources:

# To list all modules that are tagged as DSCResourceKit
Find-Module -Tag DSCResourceKit

# To list all DSC resources from all sources
Find-DscResource

Please note only those modules released by the PowerShell Team are currently considered part of the ‘DSC Resource Kit’ regardless of the presence of the ‘DSC Resource Kit’ tag in the PowerShell Gallery.

To find a specific module, go directly to its URL on the PowerShell Gallery:
http://www.powershellgallery.com/packages/< module name >
For example:
http://www.powershellgallery.com/packages/xWebAdministration

How to Install DSC Resource Modules From the PowerShell Gallery

We recommend that you use PowerShellGet to install DSC resource modules:

Install-Module -Name <module name>

For example:

Install-Module -Name xWebAdministration

To update all previously installed modules at once, open an elevated PowerShell prompt and use this command:

Update-Module

After installing modules, you can discover all DSC resources available to your local system with this command:

Get-DscResource
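
To narrow that list down to the resources shipped by a single module, you can pass the module name to the -Module parameter (xWebAdministration is used here purely as an example):

Get-DscResource -Module xWebAdministration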

How to Find DSC Resource Modules on GitHub

All resource modules in the DSC Resource Kit are available open-source on GitHub.
You can see the most recent state of a resource module by visiting its GitHub page at:
https://github.com/PowerShell/< module name >
For example, for the CertificateDsc module, go to:
https://github.com/PowerShell/CertificateDsc.

All DSC modules are also listed as submodules of the DscResources repository in the DscResources folder and the xDscResources folder.

How to Contribute

You are more than welcome to contribute to the development of the DSC Resource Kit! There are several different ways you can help. You can create new DSC resources or modules, add test automation, improve documentation, fix existing issues, or open new ones.
See our contributing guide for more info on how to become a DSC Resource Kit contributor.

If you would like to help, please take a look at the list of open issues for the DscResources repository.
You can also check issues for specific resource modules by going to:
https://github.com/PowerShell/< module name >/issues
For example:
https://github.com/PowerShell/xPSDesiredStateConfiguration/issues

Your help in developing the DSC Resource Kit is invaluable to us!

Questions, comments?

If you’re looking into using PowerShell DSC, have questions or issues with a current resource, or would like a new resource, let us know in the comments below, on Twitter (@PowerShell_Team), or by creating an issue on GitHub.

Katie Kragenbrink
Software Engineer
PowerShell DSC Team
@katiedsc (Twitter)
@kwirkykat (GitHub)

The post DSC Resource Kit Release February 2019 appeared first on PowerShell.
