
Phishing with the Sharks Using the Attack Simulator


Hello, Paul Bergson back again. It is late fall, and once again playoff time for high school and collegiate volleyball. Women's volleyball in Minnesota is a big deal; I have played and coached for over 30 years and have many great memories with friends and family in this sport. One thing I have learned is the importance of teaching young athletes to be well rounded in the game. Many become focused on the offensive part of the game and won't put in the effort to learn how to become a skilled defender. Yet they don't seem to understand that if you can't control the ball defensively, you don't get to set up an attack against your opponent.

I see this same sort of mentality when it comes to preparing a defense against phishing attacks. There are technical measures that can be put in place to guard against malware, which includes phishing attacks, but the last line of defense against phishing is your user base.

Preparing your users to be on the lookout for phishing attacks is difficult. Most figure their job isn't very glamorous and no one would want to target them. Yet the largest attack vector isn't software flaws but the human factor. Email phishing attacks randomly target millions of users, while targeted spear-phishing attacks focus on high-value assets within the company. Spear-phishing attacks are more effective and much harder to detect, with "roughly 75% of all company breaches now start with phishing attempts designed to steal user credentials."
https://blogs.technet.microsoft.com/cloudready/2018/04/30/phishing-examples-for-the-microsoft-office-365-attack-simulator-part-one/

Look at that number: 75%! With a number that large, it is easier for IT decision makers to justify budget requests to their management to protect the enterprise infrastructure. So what type of equipment is needed to protect against phishing attacks? No physical equipment is needed! Only annual (or more frequent) user training, along with ongoing tests to ensure users are following the training guidance.

What about email and spam filters – don't those protect the enterprise? The answer is yes, but like anything else, phishing and spear-phishing attacks evolve, and some of this email still lands in your users' inboxes.
https://blogs.msdn.microsoft.com/tzink/2014/09/12/why-does-spam-and-phishing-get-through-office-365-and-what-can-be-done-about-it/

At this point, your users are your last line of defense. Awareness and training could be the difference that saves your enterprise from attackers getting a foothold within the company and the opportunity to pivot from a compromised workstation. If your users have been trained to spot a phishing attack (or watering hole), they can stop the attack in the "kill chain".
https://blogs.technet.microsoft.com/prasadpatil/2017/12/15/crippling-the-cyber-kill-chain/

Training users to spot an attack comes down to two habits: don't trust people or organizations you aren't familiar with, and verify that the information provided within an email is legitimate.

Once training has been completed, it is crucial to gauge your users' level of understanding of this threat. This can be done with a phishing awareness assessment. An awareness assessment runs a phishing simulation to see which users fall victim, which don't, and which don't fall victim and also report the attack. The details pulled from this assessment can then be used to retrain the users who fell victim.

An awareness assessment can be created manually, but that can be difficult. There are third-party tools and vendors that can provide this service, but Office 365's Threat Intelligence service recently released a new enhanced feature called "Attack Simulator". Attack Simulator has three options available:

  • Spear-phishing user testing
  • Password spray attack
  • Brute-force password attack

In order to use Attack Simulator, there are several prerequisites:

  • The enterprise owns O365 Threat Intelligence (E5 licensing) or has purchased Threat Intelligence separately
  • Exchange Online is in use (on-premises Exchange is not supported)
  • Multi-Factor Authentication (MFA) for O365 is enabled and used by the account running the Attack Simulator

The only users capable of using the Attack Simulator feature are O365 Global Administrators or users who have been delegated the "Security Administrator" role.
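If you want to confirm who currently holds that role before scheduling a simulation, here is a minimal sketch using the AzureAD PowerShell module (running this is optional and just one way to check):

# Minimal sketch, assuming the AzureAD module is installed (Install-Module AzureAD).
Connect-AzureAD
# Get-AzureADDirectoryRole returns only roles that have been activated in the tenant.
$role = Get-AzureADDirectoryRole | Where-Object { $_.DisplayName -eq 'Security Administrator' }
if ($role) {
    Get-AzureADDirectoryRoleMember -ObjectId $role.ObjectId |
        Select-Object DisplayName, UserPrincipalName
}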

Prior to running an in-house phishing attack, be sure to get leadership approval, since this could be considered a hostile act even though it is just a simulation.

The O365 team has created a number of scenarios to help you create a targeted attack. Along with the links provided below, I have also included a short video on the console:

I still recall an eventual DII collegiate player on my team – good enough to help the team offensively but a detriment defensively – sitting on the bench as we were playing for a berth in the state tournament. Sadly, she never got an opportunity to play in the tournament run. The following year she finally realized that there are three components to the game: bump, set, and spike. Think of this as the volleyball "kill chain" (after the Lockheed Martin framework) which opponents will leverage to their advantage.

If you have access to the Attack Simulator, don’t “Sit on the bench” figuring it isn’t important. It is!!! Use this tool to help educate and protect your enterprise from Phishing attacks as well as Password Spray and Brute Force attacks.

“Go, Go Gophers!”


Microsoft is a Leader in The Forrester Wave™: Unified Endpoint Management, Q4 2018


Microsoft is excited to announce that we are named a Leader for Enterprise Mobility + Security (EMS) in the inaugural Forrester Wave: Unified Endpoint Management, Q4 2018. Forrester notes in the report that Microsoft's release of co-management in late 2017 has bolstered the company's ability to serve advanced Windows 10 management use cases and provides a flexible path for customers to test out modern management. Forrester also recognizes Microsoft as having some of the strongest security capabilities in the evaluation of 12 vendors.

 

[Image: Forrester Wave Leader badge]

 

We are honored and humbled by the recognition from both customers and the industry, demonstrated by the leadership position in other major analyst reports this year. It is not hard to see why customers have embraced Microsoft EMS as the most complete, intelligent solution for the security and management of their Office 365, Windows 10, and mobile endpoints.

 

Connect what you have to the cloud and shift to modern management: We hear from our customers that they love the ability to add Microsoft Intune to their existing PC management infrastructure and benefit immediately from the scale, reliability, and security of the cloud. IT professionals can build on the strong foundation they already have with System Center Configuration Manager (ConfigMgr), add the intelligence of the Microsoft cloud, and get instant new value and capabilities. We have engineered Intune and ConfigMgr to work together, and the licenses for ConfigMgr are included in your Intune subscription at no extra cost! Using co-management for select workloads enables customers to move to cloud-based, modern management practices at their own pace. It does not require you to make any other changes to your setup – you can continue domain joining and managing PCs using ConfigMgr for other workloads for as long as you need. You get the best management experience for PC and mobile, leveraging MDM APIs, automation, and conditional access where possible, and executing other workloads such as patching and software distribution with traditional tools.

 

Using the intelligent cloud to help guide decision-making: With increasingly sophisticated attacks and multiple new attack surfaces, it is not feasible to manage and protect company data using human intelligence alone. Windows administrators can soon leverage the machine learning of the Microsoft cloud to set security policies. We are pleased to publish a set of Microsoft-recommended security baselines in the Intune service that leverage the greatly expanded manageability of Windows 10 using Mobile Device Management (MDM). These security baselines will be managed and updated directly from the cloud, providing customers the most recent and most advanced security settings and capabilities available from Microsoft 365. If you're brand new to Microsoft and not sure where to start, security baselines give you an advantage: you can quickly create and deploy a secure profile to help protect your organization's resources and data. If you're currently using Group Policy, migrating to Intune for management is much easier with these baselines natively built into Intune's modern management platform. For application upgrade readiness, the upcoming Desktop Analytics service will combine data from your own organization with data aggregated from millions of devices connected to our cloud services, taking the guesswork out of testing application compatibility. ConfigMgr administrators can leverage data from Desktop Analytics in several ways, including intelligent pilot selection which ensures coverage of apps, add-ins, and hardware, as well as deep integration with Phased Deployments for a data-driven production rollout of task sequences, updates, and applications.

 

Machine risk-based conditional access with threat protection: Integration between Windows Defender ATP and Azure Active Directory conditional access through Microsoft Intune ensures that attackers are immediately prevented from gaining access to sensitive corporate data, even if attackers manage to establish a foothold on networks. When Windows Defender ATP triggers a device risk alert during an attack, the affected devices are marked as being at high risk. Conditional access immediately uses this risk score to restrict access from these devices to corporate services and data managed by Azure Active Directory. When the threat is remediated, Windows Defender ATP drops the device risk score, and the device regains access to resources. Similar integration capabilities are offered for mobile devices through security partners such as Lookout, Zimperium, Checkpoint, Symantec, Pradeo, Better Mobile, and Google Play Protect. As noted by Forrester, Microsoft “EMS has some of the strongest security capabilities in this evaluation, including native vulnerability management on Windows 10, file-level encryption, data-loss prevention (DLP), and malicious app behavior detection”.

 

You can read the in-depth analysis from Forrester here.

 

This series has other examples of organizations using Microsoft to secure their extended IT ecosystem for end-to-end protection across users, devices, apps, and data. We encourage you to visit the Microsoft Secure site and learn more about the full scope of Microsoft 365 Security capabilities. Also, check out more customer stories to learn how organizations leverage Microsoft 365 Security.

 

Visit the new home for Microsoft Enterprise Mobility + Security blogs and join the Tech Community if you haven’t signed up already. Here are some other resources where you can learn more:

 

 

 

 

Networking in Red Hat OpenShift for Windows


Hello again,

Today we will be drilling into a more complex topic following the introduction to Red Hat OpenShift for Windows on premises two weeks ago. We will expand into the networking layer of the architecture that we have chosen for the current developer previews.

You may ask yourself “Why do I care about how networking works?”
The obvious answer would be “Without it your container cannot listen or talk much to others.”
What do I mean by that? Networking is the backbone of any IT infrastructure, and container deployments are no different. The various networking components allow containers, nodes, pods, and clusters to communicate amongst each other and with the outside world.

As a DevOps engineer you will need a core understanding of the networking infrastructure pieces deployed in your container environment and how they interact – be it on bare metal, on VMs on a virtualization host, or in one of the many cloud services – so you can tailor the network setup to your needs.

Terminology

First, let's cover a few buzzwords, TLAs, and other complex things so we are all on the same page.

CNI – Container Networking Interface, a specification of a standardized interface defining the container endpoint and its interaction with the node the container runs on.
Docker – A popular container runtime.
vSwitch – Virtual switch, the central component in container networking. Every container host has one, and it provides the basic connectivity for each container endpoint. On the Linux side it somewhat resembles a Linux bridge.
NAT – Network Address Translation, a way to isolate private IP address spaces across multiple hosts and nodes behind a public IP address space.
Pod – The smallest atomic unit in a Kubernetes cluster. A pod can host one or more containers. All containers in a pod share the same IP address.
Node – An infrastructure component hosting one or more pods.
Cluster – An infrastructure component comprised of multiple nodes.
HNS – Host Network Service, a Windows component interacting with the networking aspects of the Windows container infrastructure.
HCS – Host Compute Service, a Windows component supporting the interactions of the container runtime with the rest of the operating system.
OVN – Open Virtual Network. OVN provides network virtualization to containers. In "overlay" mode, OVN can create a logical network amongst containers running on multiple hosts by programming the Open vSwitch instances running inside those hosts, which can be bare-metal machines or vanilla VMs. OVN uses two data stores, the Northbound (OVN-NB) and the Southbound (OVN-SB) data store:
  ovn-northbound
  • OpenStack/CMS integration point
  • High-level, desired state
    • Logical ports -> logical switches -> logical routers
  ovn-southbound
  • Run-time state
  • Location of logical ports
  • Location of physical endpoints
  • Logical pipeline generated based on configured and run-time state
OVS – Open vSwitch, well suited to function as a virtual switch in VM environments. In addition to exposing standard control and visibility interfaces to the virtual networking layer, it was designed to support distribution across multiple physical servers.

Here is how all these components fit into the architecture on the Windows worker node. I will talk more about them throughout the post.

[Figure: block diagram depicting the components, their layers, and their relationships based on the table above]

OpenShift for Windows Networking components

OK, now that we are on the same page let’s dive in.

Setup

To recap from the last post, we have a Linux Red Hat OpenShift master node, which also serves as the Kubernetes master, and a Windows Server Core worker node joined to the master. The deployment also uses the Docker container runtime on both the Linux and the Windows node to instantiate and execute the containers.
You can deploy the nodes on one VM host, across multiple VM hosts, or on bare metal, and you can deploy more than two nodes in this environment. For the purpose of this discussion we have deployed a separate VM host and use it to host both the Linux and the Windows node.
Next, let's dig into the networking: how the networks are created and how the traffic flows.

Networking Architecture

The image below shows the networking architecture in more detail, zooming into the picture above on both the Linux node and the Windows node.
Looking at the diagram we can see that there are several components making up the networking layer.

[Figure: block diagram depicting the two-node architecture for the developer preview of OpenShift for Windows]

OpenShift for Windows Networking Architecture

The components fall into several groups:

  • Parts which are open source components (light orange).
  • Parts which are in the core Windows operating system (bright blue).
  • Parts which are open source and to which Microsoft made specific changes that were shared with the community (light blue).

On the Linux side, the open source components include the container runtime (such as the Docker Engine) and Kubernetes components such as the following (a quick cluster check follows the list):

  • kube-proxy – (Kubernetes network proxy) which runs on each node and reflects services as defined in the Kubernetes API on each node for traffic forwarding across a set of backends.
  • kubelet – is the primary “node agent” that runs on each node. The kubelet works by reading a PodSpec object which is a YAML or JSON document that describes a pod.
  • To find out more about Kubernetes components on Linux check the Kubernetes documentation here.
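To sanity-check that both the Linux master and the Windows worker have joined the cluster, you can query the API server from any machine with kubectl configured against the master. A minimal sketch (node names are hypothetical):

kubectl get nodes -o wide
# Expected output lists both nodes and their OS images, roughly:
# NAME            STATUS   ROLES     OS-IMAGE
# ocp-master      Ready    master    Red Hat Enterprise Linux ...
# win-worker-01   Ready    <none>    Windows Server Standard ...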

On the Windows side, some of these components, such as the kube-proxy and the kubelet, have been enhanced by Microsoft to work with the Microsoft networking components, namely the Host Compute Service (HCS) and the Host Network Service (HNS). These changes allow interoperability with Windows core services and abstract away the differences in the underlying architecture.

One of the differences between Linux nodes and Windows nodes in this system is the way the nodes are joined to the Kubernetes cluster. On Linux you would use a command like:
kubeadm join 10.127.132.215:6443 --token <token> --discovery-token-ca-cert-hash <cert hash>

On Windows, where the kubeadm command is not available, the join is handled by the Host Compute Service when the resource is created.
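If you want to inspect what the networking stack has created on the Windows worker, here is a minimal sketch using the HNS helper module (hns.psm1 from Microsoft's SDN GitHub repository; copying it to the node is an assumption on my part, not part of the preview setup):

# Assumes hns.psm1 (from the Microsoft/SDN GitHub repo) has been copied to the node.
Import-Module .\hns.psm1
# List the HNS networks that the container networking stack has created here:
Get-HnsNetwork | Select-Object Name, Type, Id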

The key takeaway is that the underlying architectural differences between Linux and Windows are abstracted away, and the process of setting up Kubernetes for Windows and managing the networking components of the environment is straightforward and mostly familiar if you have done it on Linux before.
Also, since Red Hat OpenShift calls into Kubernetes, the administrative experience will be uniform across Windows and Linux nodes.
That being said, what we are discussing today is the architecture of the currently available developer preview. Microsoft and Red Hat are working to integrate the Windows CNI into the flow to replace OVN/OVS. We will keep support for OVN/OVS and add other CNI plugins as we progress, but we will switch to the Windows CNI during the first half of 2019. So be on the lookout for an update on that.

To say it with a famous cartoon character of my childhood: "That's all folks!"

Thanks for reading this far and see you next time.

Mike Kostersitz

P.S.: If this post was too basic or too high-level, stay tuned for a deeper dive into Windows container networking architecture and troubleshooting common issues, coming soon to a blog near you.

Using the Fully Qualified Domain Name for Remote Control in System Center Configuration Manager


Hello everyone, Jonathan Warnken here. I am a Premier Field Engineer (PFE) for Microsoft. I primarily support Configuration Manager, and today I want to talk about creating a custom console extension to allow the use of a Fully Qualified Domain Name (FQDN) when starting a remote control session. If you work in a multi-domain environment or need to support DirectAccess clients, you will quickly find one of the challenges with Configuration Manager: when it starts a remote control session, it defaults to the client name, which generally matches the NetBIOS name of the system. I was recently challenged by a customer wanting to simplify the management of clients connected via DirectAccess. They had correctly configured the environment to support managing these devices and could complete all connections except starting remote control via the console. Remote control would work, but the initial connection via the NetBIOS name would fail, and the user would need to enter the FQDN to allow the connection to complete successfully.

In all other cases, when connecting to clients via DirectAccess, the operating system would append the correct DNS suffix. However, the Configuration Manager console would not. Starting the remote control tool via the command line does support passing the FQDN and/or the IP address. My first solution was to write a PowerShell one-liner to take the computer name and look up the FQDN.

& "$env:SMS_ADMIN_UI_PATH\CmRcViewer.exe" $([net.dns]::GetHostEntry('YourComputerName').Hostname)

Simple and effective, but the response was that it needed to be even simpler. So after a little digging in the AssetManagementNode.xml file, I saw that two nodes exposed the remote control options via a right-click in the console. For more info on finding console nodes, see https://docs.microsoft.com/sccm/develop/core/servers/console/how-to-find-a-configuration-manager-console-node-guid

With this info I decided to write a custom right-click action to run the PowerShell command. The first node I started with was the devices view, which has a GUID of "ed9dee86-eadd-4ac8-82a1-7234a4646e62". After reading https://docs.microsoft.com/en-us/sccm/develop/core/servers/console/how-to-create-a-configuration-manager-action, I created a test action using the Notepad example from the docs site, and everything worked great, so I made an XML file to execute the PowerShell command. And nothing happened! After some head scratching and a few expletives uttered, I realized that the ampersand (&) character is a special character in XML and must be escaped for the file to be correctly parsed.

NOTE: The XML content is copied below.

<ActionDescription Class="Executable" DisplayName="PFE FQDN Remote Control" MnemonicDisplayName="PFE FQDN Remote Control" Description="Use FQDN to start remote control session">
  <ShowOn>
    <string>ContextMenu</string>
  </ShowOn>
  <Executable>
    <FilePath>C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe</FilePath>
    <Parameters>-nologo -noprofile -noninteractive -windowstyle hidden -ExecutionPolicy Bypass -Command "&amp; (Join-Path $env:SMS_ADMIN_UI_PATH CmRcViewer.exe) $([net.dns]::GetHostEntry('##SUB:NAME##').Hostname)"</Parameters>
  </Executable>
  <SecurityConfiguration>
    <ClassPermissions>
      <ActionSecurityDescription RequiredPermissions="32" ClassObject="SMS_Collection"/>
    </ClassPermissions>
  </SecurityConfiguration>
</ActionDescription>

 

You will also note that a special variable is used to pass the client name: ##SUB:NAME## is substituted by the console with the client name when the custom action runs.

To use the custom action, you will need to save the XML file in the XmlStorage\Extensions\Actions folder under the path where the console is installed. Under the Actions folder, create a folder named after the GUID of the node in which you would like the action to appear, and save the file there. As I said earlier, there are two nodes I wanted my extension to appear in. The other node I want to use is the devices in a collection node, for which the GUID is "3fd01cd1-9e01-461e-92cd-94866b8d1f39".
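To save a few clicks, here is a minimal deployment sketch; the XML file name (PFE_FQDN_RemoteControl.xml) and the console-root derivation are assumptions, so adjust them for your install:

# Minimal sketch; assumes a default console install where $env:SMS_ADMIN_UI_PATH
# points at <console root>\bin\i386, and a hypothetical name for the action XML.
$consoleRoot = Split-Path (Split-Path $env:SMS_ADMIN_UI_PATH)
$actions = Join-Path $consoleRoot 'XmlStorage\Extensions\Actions'
foreach ($guid in 'ed9dee86-eadd-4ac8-82a1-7234a4646e62',
                  '3fd01cd1-9e01-461e-92cd-94866b8d1f39') {
    $target = Join-Path $actions $guid
    New-Item -ItemType Directory -Path $target -Force | Out-Null
    Copy-Item .\PFE_FQDN_RemoteControl.xml -Destination $target
}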

With the XML in place, the new right-click tool is ready for action.

Hopefully, you will find this useful if you need to do something similar or want to write your own custom extension. If you would like to use mine, the XML and an install script are available at https://github.com/mrbodean/AskPFE/tree/master/ConfigMgr%20FQDN%20Remote%20Control/source/ed9dee86-eadd-4ac8-82a1-7234a4646e62

Thanks for reading

How does Microsoft Intune transform Android enterprise management? Let me count the ways


With Android Enterprise, Google raises the bar for management of mobile devices and services. Additional management capabilities and improved consistency across the Android ecosystem enable you to confidently deploy Android devices in your enterprise. From the enterprise mobility management (EMM) perspective, Android Enterprise replaces the legacy Device Administration API (referred to as device admin in this article) to provide enhanced privacy, security, and management capabilities for company-owned and bring-your-own devices alike. Microsoft is one of the first EMM vendors to embrace Google's cloud services architecture for Android Enterprise, known as the Android Management API, which streamlines the design and deployment of management solutions and enables Intune to release available platform features at a more consistent pace. Microsoft supports Google's recommendation that all partners and customers move off device admin management, since Google has announced that it will be removing device admin capabilities in the near future. In this article, we explore the paths Microsoft Intune customers may choose as they plan their Android management.

 

How can Microsoft Intune simplify my transition to Android Enterprise?

 

Microsoft Intune offers flexible device management options for Android Enterprise so you can select the right management approach for different use cases and scenarios relevant to your organization. Typically, Android devices fall into two groups:

  1. personal devices used for work, also known as bring-your-own devices (BYOD), or
  2. company owned devices delivered by IT.

 

This simplified flowchart provides a high-level overview of the flexible alternatives. 

 

[Flowchart: Android device admin to Android Enterprise migration paths]

 

Some organizations allow employees to use the same device for personal use and work apps. Microsoft helps them deliver a great user experience that adapts to employees' individual work styles for the highest productivity, without compromising security. Organizations have a key stake in protecting any corporate data that is viewed or stored on personal devices in the form of emails, calendars, documents, and certain apps. Depending on your organizational needs, you may require enrollment of devices for access to work data, or you may choose to manage corporate data and apps without enrolling the device itself. For the former use case, Intune supports Android Enterprise work profiles, which require users to enroll and provide certain device-level controls for IT administrators. If you don't need the device management capabilities, you may instead deploy Intune app protection policies (APP), which manage the corporate identities and protect corporate data on devices without enrollment.

 

For company owned devices, IT administrators can apply extensive policies with Microsoft Intune to configure the settings, security, and availability of apps and resources on the device. Intune supports the Android Enterprise dedicated device mode, designed for locked-down kiosk-style use cases where the device is not associated with a specific user identity. Dedicated device mode provides IT the ability to control the use of the keyboard, camera, push apps and updates, and restrict access to settings or other parts of the software in certain employee or customer-facing scenarios such as kiosks, digital signage, point-of-sale devices, and handhelds. Early next year, Intune will introduce the Android Enterprise fully managed capabilities for company owned devices, which give IT control over the device while leveraging identity-driven features such as conditional access policies, email and calendar support (including Microsoft Outlook for Android), personalization, and so on.

 

With any of these Android Enterprise device management modes, IT admins can take advantage of app lifecycle management features with Managed Google Play.  Managed Google Play provides a substantial set of improvements in app management compared to what is available with device admin.  Some of the benefits of using Managed Google Play for your corporate app store:

 

  • Push managed apps – deploy required/mandatory apps to users without requiring that they perform any steps. Deploy the app from the Intune console, and it will install automatically on the device.
  • Unified app experience – the end user experience for apps is now the same regardless of whether you are managing an app from the public Play Store or a private line-of-business app.
  • Enhanced security – end users no longer have to enable installations from unknown sources to install apps. This is more secure than the earlier approach and improves the end user's experience.

 

Let’s dig a little deeper to understand which approach meets your organization needs.

Modern management of BYO devices 

 

Microsoft Intune supports two management modes for bring-your-own devices: work profiles and Intune app protection policies.

Work Profile management when users enroll their devices

Work profile mode is suitable for BYOD deployments where you require device-level controls such as push-deployed apps, a device PIN code (at the device or work profile level), certificate management, or Wi-Fi and VPN configuration. In this mode, the end user initiates enrollment, which creates a work profile on the device. This work profile is manageable by IT, and it sits alongside the user's personal profile. The end user has complete privacy of personal apps and data, since they reside in a separate space from the IT-managed work profile. IT has the ability to install certificates and required apps in the work profile. The separation between apps in the personal profile and the corporate apps in the work profile is enforced at the OS level.

 

Learn more about how to set up enrollment of work profile devices and see the user flow for work profile enrollment. If you use Microsoft System Center Configuration Manager for hybrid mobile device management, note that while we support enabling work profile enrollment in Configuration Manager, we recommend that you move away from hybrid mobile device management. This will allow you to leverage all of the Android Enterprise capabilities supported by Intune.

 

Intune app protection policy (APP) management with or without device enrollment 

For scenarios where you do not require device level controls or have a set of users that may not enroll their devices for management, you can use Intune’s app protection policies to manage only the corporate identities and corporate data on a device without managing the device itself. This provides you with the data protection you require for your corporate data, but with the lightest touch and smallest management footprint on the device.  This capability is available across all releases of Android 4.4 and up and is not affected by the coming discontinuance of device admin management. By implementing app-level policies, you can prevent company data from saving to untrusted cloud storage locations (“Prevent Save As”) or from being shared to other apps that aren't protected by app protection policies (“Restrict cut, copy, and paste”). You can require a PIN to open an app in a work context, block managed apps from running on rooted devices, and selectively wipe company data from managed apps.
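If you want to inventory the app protection policies already defined in your tenant, one option is the Microsoft Graph API. A minimal sketch (token acquisition is assumed to have happened already; the endpoint and permission scope are the documented ones):

# Minimal sketch: list Intune app protection policies via Microsoft Graph.
# Assumes $accessToken already holds a token with DeviceManagementApps.Read.All.
$headers = @{ Authorization = "Bearer $accessToken" }
$result = Invoke-RestMethod -Method Get -Headers $headers `
    -Uri 'https://graph.microsoft.com/v1.0/deviceAppManagement/managedAppPolicies'
$result.value | Select-Object displayName, id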

 

Learn how to create and assign app protection policies and review the specific Android settings. Intune app protection policies provide maximum device management flexibility by protecting your company's data independent of any mobile device management (MDM) solution, whether devices are enrolled with Intune, enrolled with a third-party MDM, or not enrolled in any MDM.

 

Modern management of corporate-owned devices 

 

Microsoft Intune supports several management modes for Android Enterprise corporate devices.

Android Enterprise dedicated device management

Dedicated device management for kiosk-type Android Enterprise devices is one of the fastest growing use cases for Intune management, as it enables kiosk-type scenarios on any Android Enterprise device. In the past, this was restricted to device manufacturer specific extensions to Android device admin management. IT admins can lock down the usage of devices to a limited set of apps and web links and prevent users from adding other apps or taking other actions on the device. Devices that are managed in this way are enrolled in Intune without a user account and aren't associated with any end user. They're not intended for personalized applications or apps, such as Outlook or OneDrive, that inherit policies based on user identity. For specific employee and customer-facing scenarios, IT requires a robust solution where devices can be shipped thousands of miles away, be plugged in by line-of-business staff, and start working without any on-site technical support. With Intune, these devices are easy to provision, to push a set of apps to and keep them updated, and to configure remotely. Note that devices will need to be factory reset to be enrolled into this mode.

 

If you are currently using the Samsung Knox settings for kiosk devices, you may transition to this method for Android Enterprise support.

 

Learn about the different enrollment methods available to set up Android kiosk-style devices and manage them remotely.

Android Enterprise fully managed device mode

The fully managed device mode is usually suitable for information worker devices that are provided by the company and associated with individual user identities. Device and app management capabilities in this mode exceed the current capabilities under an equivalent device admin mode. User-oriented features such as conditional access are available with this mode, and they are tailored for conventional productivity scenarios such as calls, messaging, email, app store access, and so on. With the addition of this capability, corporate device administrators will get to choose the extent of Android Enterprise management appropriate for different departments and users within the organization. Watch for the public preview rolling out soon.  

 

Shift with confidence to modern management

Now is the time to prepare your organization to adopt the higher security requirements and wider variety of use cases available in the Android Enterprise ecosystem. Microsoft offers a variety of resources and support tools to help you in this journey. Start by using Microsoft FastTrack to plan your cloud deployment; the service is included in most Microsoft subscriptions.

 

Customers with eligible subscriptions to Microsoft 365, Microsoft Enterprise Mobility + Security (EMS) or Microsoft Intune can use FastTrack at no additional cost for the life of their subscription. Whether you are a customer or a partner, FastTrack provides customized guidance for onboarding and adoption, including access to Microsoft engineering expertise, best practices, tools, and resources so you can leverage existing resources instead of creating new ones.

 

More info and feedback

Learn how to get started with Microsoft Intune with our detailed technical documentation. If you missed Microsoft Ignite, check out these excellent Android migration tips (video) by product managers Chris Baldwin and Saud Al-Mishari. 

 

Don’t have Microsoft Intune? Start a free trial or buy a subscription today!

 

As always, we want to hear from you! If you have any suggestions, questions, or comments, please visit us on our Tech Community page. Follow us on social media @MSIntune 

 

Rule your inbox with Microsoft Cloud App Security


Exploited accounts can be used for several malicious purposes, including reading email in a user's inbox, creating rules to forward future emails to external accounts, running internal phishing campaigns to gain access to further inboxes, and creating malicious rules to help an attacker remain undetected.

 

As part of our ongoing research to analyze trends and attack techniques, the Microsoft Cloud App Security team was able to deploy two new detection methods to help tackle malicious activities against Exchange inboxes protected with Microsoft Cloud App Security. Since we started rolling out these new detections, we have seen more than 3,000 suspicious rule alerts each month.

 

Image 1: Built-in alerts for suspicious inbox rules

Malicious forwarding rules

Some email users, particularly those with multiple mailboxes, set forwarding rules to move corporate emails to their private email accounts. While seemingly harmless, this behavior is also a known method used by attackers to exfiltrate data from compromised mailbox accounts. Without a way to easily identify malicious rules, forwarding rules can stay in place for months, even after changing account credentials.

 

Microsoft Cloud App Security can now detect and alert on suspicious forwarding rules, giving you the ability to find and delete hidden rules at the source.

 

Malicious forwarding rule names vary; they can have simple names such as "Forward All Emails" or "Auto forward", or they can be created with deceptive names, such as a nearly hidden ".". In fact, forwarding rule names can even be empty, and the forwarding target can be a single email account or an entire list. There are even ways to hide malicious rules from the user interface. Now, you can use the new Microsoft Cloud App Security detections to analyze suspicious behavior and generate alerts on forwarding rules - even when the rules are seemingly hidden.

 

In nearly all cases, if you detect an unrecognized forwarding rule to an unknown internal or external e-mail address in a user’s inbox rule setting, you can assume that the inbox account was compromised. Once detected, you can leverage this helpful blog post on how to delete hidden rules from specific mailboxes when required.

 

Image 2: Suspicious inbox forwarding rules - detailed description

Malicious folder manipulation

Another scenario we recognized and built detections for seems to be used in a later attack phase. Attackers set an inbox rule to delete and/or move emails to a less noticeable folder (e.g., "RSS"). These rules move all emails, or only those which contain specific target keywords. We identified nearly 100 common, relevant words that malicious delete or move inbox rules look for in a message body and subject. Some of the most popular words we identified in these types of rules include:

 

"superintendent" , "malware" , "malicious" , "suspicious" , "fake" , "scam" , "spam" , "helpdesk" , "technology" , "do not click" , "delete" , "password" , "do not open" , "phishing" , "phish" , "information" , "payment election" , "direct deposit" , "payroll" , "fraud" , "virus" , "hack" , "infect" , "steal" , "attack" , "hijack" , "Payment" , "workday" , "linkedin" , "Workday" , "Payroll" , "received" , "Fraud" , "spyware" , "software" , "attached" , "attachment" , "Help Desk" , "president" , "statement" , "threat" , "VIRUS WARNING" , "DO NOT OPEN" , "FW: Phishing Attempts" , "email" , "regarding" , "URGENT Warning" , "Acknowledge" , "Link" , "disregard" , "did u send me an email" , "Suspicious email" , "Spam" , "Virius" , "Viruis" , "Hack" , "Postmaster" , "Mailer-Daemon" , "Message Undeliverable" , "survey" , "hacked" , "Password" , "linked-in" , "linked in" , "invoice" , "Fidelity Net Benefits" , "Net Benefits" , "401k" , "Fidelity" , "Security code" , "ADP" , "Strategic consultancy services fees - Payment" , "Direct deposit" , "syed" , "Zoominfo" , "zoominfo" , "Re: Fw: Revised Invoice" , "security"

 

Corresponding rule names we saw repeatedly include:

“xxx", "xxxx" , "." , ".." , ",.,." , "..." , ",." , "dsfghjh" , "At Work" , "words" , "ww" , "dsfghjh" , "email" , "mail" , "Delete messages with specific words" , "Clear categories on mail (recommended)”

 

Attackers use these kinds of rules to manipulate the original mailbox user, remain undetected in the mailbox, and potentially run internal phishing campaigns from the compromised mailbox at the same time. Attackers set rules like these to hide their activities from the original mailbox user and to ensure the user can't see warning alerts about malicious behavior in their own mailbox.

 

These rules can be created using various methods. Once an attacker has access to user account credentials, they may log in to the account's mailbox to set and manipulate rules using https://outlook.office.com. Another option is to use an API that allows the creation of new inbox rules via automated script. The PowerShell New-InboxRule cmdlet is an example of an API that is frequently used by attackers to accomplish this.
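Defenders can use the same APIs to hunt. A minimal sketch with the Exchange Online PowerShell cmdlets (the property filter is illustrative, not a complete detection):

# Minimal sketch: enumerate inbox rules across mailboxes and flag ones that
# forward, redirect, or delete mail. Assumes the Exchange Online PowerShell
# module is installed and you have sufficient admin rights.
Connect-ExchangeOnline
Get-Mailbox -ResultSize Unlimited | ForEach-Object {
    Get-InboxRule -Mailbox $_.UserPrincipalName
} | Where-Object { $_.ForwardTo -or $_.RedirectTo -or $_.DeleteMessage } |
    Select-Object MailboxOwnerId, Name, Enabled, ForwardTo, RedirectTo, DeleteMessage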

 

Image 3: Suspicious inbox manipulation rule - detailed alert description

Gaining mailbox access

One method attackers use to gain initial access to an email account is to obtain clear text passwords of the inbox account.

 

Another common way to gain initial access to a user's mailbox is an OAuth attack, which doesn't require the attacker to have the full user credentials at any time. Victims sign in to a third-party cloud application and agree to delegate permissions, allowing the application to change their mailbox settings on their behalf. This scenario requires the user's consent to delegate their permissions. These applications often impersonate legitimate applications the users commonly use and exploit users to gain access to their accounts by requesting high permission levels. In the example below, the attackers used the application name "Outlook" to deceive users and eventually push mailbox changes for any authenticated user. To find out more about risky third-party app authorizations and how to detect and revoke them with Microsoft Cloud App Security, refer to our recent blog post.
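To review which delegated grants already exist in your tenant, here is a minimal sketch with the AzureAD module (the 'Mail' scope filter is illustrative):

# Minimal sketch: list OAuth2 delegated permission grants and surface
# mail-related scopes for review. Assumes the AzureAD module is installed.
Connect-AzureAD
Get-AzureADOAuth2PermissionGrant -All $true |
    Where-Object { $_.Scope -match 'Mail' } |
    Select-Object ClientId, PrincipalId, ConsentType, Scope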

 

Image 4: OAuth attack of an impersonated cloud app

Rule your inbox

 

Setting and communicating inbox best practices for your organization is always the first step.

 

Ensure each of your inbox owners knows:

 

  • When delegating permissions to an app, verify the requested permissions fit expectations.
  • Always remain suspicious regarding write-permission requests.
  • Consider whether to allow an application to make changes to the mailbox on their behalf, especially without requesting their permission for specific changes.
  • If any evidence of a malicious rule is found, follow the steps in How to stop and remediate the Outlook Rules and Forms attack to remediate.

Microsoft Cloud App Security provides full visibility into your corporate Exchange Online services, enabling you to combat malicious rules and cyber threats and to control how your data travels. MCAS is available as part of Enterprise Mobility + Security E5 or as a standalone service.

 

More info and feedback

Learn how to get started with Microsoft Cloud App Security with our detailed technical documentation. Don’t have Microsoft Cloud App Security? Start a free trial today!

 

As always, we want to hear from you! If you have any suggestions, questions, or comments, please visit us on our Tech Community page.

 

Chelsio RDMA and Storage Replica Perf on Windows Server 2019 are 💯


Heya folks, Ned here again. Some recent Windows Server 2019 news you may have missed: Storage Replica performance was greatly increased over our original numbers. I chatted about this at earlier Ignite sessions, but when we finally got to Orlando, I was too busy talking about the new Storage Migration Service.

To make up for this, the great folks at Chelsio decided to set up servers with their insane 100Gb T62100-CR iWARP RDMA network adapters, then test the same replication on the same hardware with both Windows Server 2016 and Windows Server 2019; apples to apples, baby. If you've been in a coma since 2012: Windows Server uses RDMA for CPU-offloaded, high-performance SMB Direct data transfer over SMB3. iWARP brings the additional advantage of metro-area ranges while still using TCP for simplified configuration.

The TL;DR: Chelsio iWARP 100Gb – with SMB 3.1.1 and SMB Direct providing the transport – makes Storage Replica so low latency and so high bandwidth that you can stop worrying about your storage outrunning it. 😂 No matter how much NVMe SSD we threw at the workload, the storage ran out of IO before the Chelsio network did. It's such an incredible flip from most of my networking life. We live in magical networking times.

In these tests we used a pair of Supermicro servers, one with five striped Intel NVMe SSDs, the other with five striped Micron NVMe SSDs. Each had 24 3GHz Xeon cores and 128GB of memory. They were installed with both Windows Server 2016 RTM and Windows Server 2019 build 17744. A single 1TB volume was formatted on the source storage. Each server got a single-port 100Gb T62100-CR iWARP RDMA network adapter and the latest Chelsio Unified Wire drivers.

Let’s see some numbers and charts!

Initial Block Copy

We started with initial block copy, where Storage Replica must copy every single disk block from a source partition to a destination partition. Even though the Chelsio iWARP adapter is pushing 94Gb per second at a sustained rate – which is as fast as this storage will send and receive – CPU overhead is only 5% thanks to offloading. And even five RAID-0 NVMe SSDs at 100% read on the source and 100% write on the destination couldn't completely fill that single 100Gb pipe. With SMB Multichannel and another RDMA port turned on – this adapter has two – it would have been even less utilized.

That entire 1TB volume replicated in 95 seconds.
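As a back-of-the-envelope check: 1 TB in 95 seconds works out to roughly 10.5 GB/s, or about 84 Gbps of payload throughput – right in line with the ~94 Gbps sustained rate above once you account for protocol overhead and ramp-up.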

People talk about the coming 5G speed revolution and I can’t help but laugh my butt off, tbh. 😁

Continuous Replication

There shouldn't be much initial sync performance difference between Windows Server 2016 and 2019, because the logs are not used in that phase of replication. They only kick in once block copy is done and you are performing writes on the source. So for this phase, two sets of tests were run on the exact same hardware and drivers: a few times with Windows Server 2016's v1 log and a few times with Windows Server 2019's tuned-up v1.1 log.

To perform the test we used DiskSpd, a free IO workload generation tool we provide for testing and validation. This is the tool used to ensure that the Microsoft Windows Server Software-Defined HCI clusters sold by Dell, HPE, DataON, Fujitsu, Supermicro, NEC, Lenovo, QCT, and others meet the logo standards for performance and reliability under stress, via a test suite we call "VM Fleet."
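For reference, a representative DiskSpd invocation for the 8K write test might look like the following; the exact parameters the team used were not published, so treat every value here as an assumption:

# Hypothetical DiskSpd run: 8K writes, 8 threads, 8 outstanding IOs per thread,
# 60 seconds, software caching disabled, latency stats captured, against a 64GB
# test file on the replicated volume (E: is an assumed drive letter).
.\diskspd.exe -b8K -t8 -o8 -d60 -w100 -Sh -L -c64G E:\sr-test.dat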

OK, enough Storage Spaces Direct shilling for Cosmos, let’s see how the perf changed between Storage Replica in Windows Server 2016 (aka RS1) and Windows Server 2019 (aka RS5).

The lower orange line shows Windows Server 2016 performance as we hit the replicated volume on the source with 4K, 8K, then 16K IO writes. The upper green line for Windows Server 2019 shows improvements of roughly 2-3X depending on IO size, measured in MB per second (that's a big B for bytes, not bits), and you can see we tuned as carefully as possible for the common 8K IO size. Because we're using extra-wide, low-latency, high-throughput, low-CPU-impact Chelsio NICs, you'll never have any bottlenecks due to the network, and it will all be dedicated to the actual workload you're running, not just to being a special "replication network" of the sort so common in the old world of regular low-bandwidth 1 and 10 Gb TCP dumb adapters.

The Big Sum Up

Storage Replica with Chelsio T6 provides datacenters with high-performance data replication across local and remote locations, with the configuration ease of TCP-based iWARP, and ensures that your most critical workloads are protected with synchronous replication. Chelsio makes a cost-effective and secure data recovery solution that should appeal to a datacenter or org of any size.

The bottom line: we've entered a new age for moving all that data around, and its name is iWARP. Get on the rocket, IT pros.

Until next time,

– Ned “RDMA good, old networking bad. Me simple man” Pyle


KubeCon, Windows Containers on Kubernetes, and 101 Materials for Your Holiday Reading…


[Photos: KubeCon 2018 and the SIG-Windows maintainers meetup]

I attended KubeCon earlier this week in Seattle and had some fun there. It was eye-opening to see the vibrant community; the energy is enormous. I feel very fortunate to witness this part of history. Windows container presence on the showcase floor and in sessions appeared small, but the Azure booth was busy with lots of customers asking about it. I am equally excited and proud, along with the rest of the container ecosystem, to play a role in the core of Windows container technology and make a difference – building and growing this community step by step. And we definitely need a lot of you to join us in this journey!

In the Sig-Windows maintainers meetup on Wednesday, hosted by the Co-Chairs Patrick Lang and Michael Michael (see pictures above), I was very happy to meet some keen customers who have been testing or even deploying Windows containers in production. I have also seen growing interest in other recent customer meetings, so I thought I should share a compiled list of the info related to Windows containers and Kubernetes we have today, so we can all learn together. In addition, lots of you have asked for Windows container 101 materials. There are tons of materials out there; those our team presented at the Microsoft Ignite conference back in September are good starters, and I added notes to some interesting demos embedded in our sessions. The list below is not meant to be exhaustive, but it makes a good holiday reading list in case you get bored opening presents. :)

KubeCon 2018 Sessions:

General Docs:

  • Windows containers docs: aka.ms/windowscontainers.
    • This is the portal to all docs. If you see things incorrect or missing, let us know or contribute directly!

Windows Container 101 Sessions from Ignite 2018:

  • BRK2234 – Getting started with Windows Server containers in Windows Server 2019:
    • Video
      • 23:10: High level intro on container identity/gMSA with a demo.
      • 36:00: a quick demo on Docker for Windows with local Kubernetes supporting Windows & Linux containers side by side
    • Slides
  • BRK2237 – From Ops to DevOps with Windows Server containers and Windows Server 2019
    • Video
      • 45:30: Orchestrators including Kubernetes
      • 1:00:00: Windows Admin Center & Container demo
    • Slides
  • BRK2236 – Take the next step with Windows Server container orchestration

You can search for other Ignite sessions here: https://aka.ms/TechCommunity/Microsoft_Ignite2018

Happy Reading, Happy Holidays!

Weijuan

@WeijuanLand

 

RSAT on Windows 10 1809 in Disconnected Environments


Hello everyone, Ty McPherson here, along with fellow engineers Andreas Pacius and Edwin Gaitan. We wanted to put together and share some information to help you set up Remote Server Administration Tools (RSAT) on Windows 10 1809.

Starting with Windows 10 v1809, the Remote Server Administration Tools (RSAT) are a Feature on Demand (FoD). Features can be installed at any time, and the requested packages are obtained through Windows Update. However, some of you are not connected to the internet to retrieve these packages but still need the RSAT features enabled. The steps below will allow you to install some or all of the RSAT features. There are a couple of options available to you, so please read through them to determine the best course of action for your needs.

The first step in all cases is to obtain the FoD media from the Volume Licensing Service Center (VLSC). Log in and search for Features on Demand, ensuring you select the same edition that you want RSAT enabled on.

Figure 1 – VLSC Search for Features on Demand

Download Disk 1 of the latest release

Figure 2 – Download Disk 1 of the latest release

Before we install RSAT, let’s examine what’s available using Get-WindowsCapability. Run the following command:

Get-WindowsCapability -Online | ? Name -Like 'Rsat*' | FT

Figure 3 – Check RSAT FoD Status

Here are your choices; some are great for quick one-off installations, while others can make the FoD resources available to the whole enterprise.

Option 1:

You can copy the files from the .iso media to a local directory, move them to a network share, and make them available to the administrative staff.

#Specify ISO source location
$FoD_Source = "$env:USERPROFILE\Downloads\1809_FoD_Disk1.iso"

#Mount ISO
Mount-DiskImage -ImagePath $FoD_Source
$path = (Get-DiskImage $FoD_Source | Get-Volume).DriveLetter

#Language desired
$lang = "en-US"

#RSAT destination folder
$dest = New-Item -ItemType Directory -Path "$env:SystemDrive\temp\RSAT_1809_$lang" -Force

#Copy the RSAT .cab files for the selected language
Get-ChildItem ($path + ":\") -Name -Recurse -Include *~amd64~~.cab, *~wow64~~.cab, *~amd64~$lang~.cab, *~wow64~$lang~.cab -Exclude *languagefeatures*, *Holographic*, *NetFx3*, *OpenSSH*, *Msix* |
    ForEach-Object { Copy-Item -Path ($path + ":\" + $_) -Destination $dest.FullName -Force -Container }

#Copy the metadata
Copy-Item ($path + ":\metadata") -Destination $dest.FullName -Recurse
Copy-Item ($path + ":\FoDMetadata_Client.cab") -Destination $dest.FullName -Force -Container

#Dismount ISO
Dismount-DiskImage -ImagePath $FoD_Source

Use the following PowerShell to install RSAT from the FoD source placed on a network share in Option 1.

#Specify FoD source location
$FoD_Source = "C:\Temp\RSAT_1809_en-US"

#Grab the available RSAT features
$RSAT_FoD = Get-WindowsCapability -Online | Where-Object Name -Like 'RSAT*'

#Install RSAT tools
foreach ($RSAT_FoD_Item in $RSAT_FoD)
{
    Add-WindowsCapability -Online -Name $RSAT_FoD_Item.Name -Source $FoD_Source -LimitAccess
}

Option 2:

Alternatively, you can mount the .ISO and specify the mounted drive as the source, passing the local drive letter to the -Source parameter when executing Add-WindowsCapability.

If installing from a mounted ISO, the below is an example PowerShell script:

#Specify ISO source location
$FoD_Source = "$env:USERPROFILE\Downloads\1809_FoD_Disk1.iso"

#Mount ISO
Mount-DiskImage -ImagePath $FoD_Source
$FoD_Drive = (Get-DiskImage $FoD_Source | Get-Volume).DriveLetter

#Grab the available RSAT features
$RSAT_FoD = Get-WindowsCapability -Online | Where-Object Name -Like 'RSAT*'

#Install RSAT tools
foreach ($RSAT_FoD_Item in $RSAT_FoD)
{
    Add-WindowsCapability -Online -Name $RSAT_FoD_Item.Name -Source ($FoD_Drive + ":\") -LimitAccess
}

#Dismount ISO
Dismount-DiskImage -ImagePath $FoD_Source

After the installation completes, we’ll use Get-WindowsCapability again to check the status of the RSAT features.

Figure 4 – Ensure RSAT features installed

Thank you for taking some time to read this and learn about the changes to RSAT in Windows 10 1809. We hope this helps you as you transition to this recent build of Windows 10.

Good Luck!

System Center Virtual Machine Manager fails to enumerate and manage Logical switch deployed on the host


When Windows update KB4467684, KB4478877, KB4471321 or KB4483229 is installed on a VMM-managed Windows Server 2016 host, VMM is not able to enumerate or manage the Logical Switch deployed on the host. Customers will notice the following symptoms when they open the ‘Virtual Switches’ property of the host.

  • VMM throws the error ‘An uplink port profile set was not specified on the host network adapter <NetworkAdapterName> and was not supplied with Logical switch <LogicalSwitchName>’
  • The Uplink Port Profile drop down menu will appear empty on the Host Virtual Switch – Logical Switch configuration page.

Cause:

 

The above-mentioned updates unregister the following WMI classes, which are used by the VMM agent to enumerate and manage the Logical Switch deployed to the host.

The classes and their owning MOF files are:

Scvmmswitchportsettings.mof
  • Scvmm_VirtualEthernetSwitchInternalSettingData
  • Scvmm_EthernetSwitchPortInternalSettingData
  • Scvmm_VirtualEthernetSwitchHyperVNetworkVirtualizationSettingData
  • Msvm_EthernetSwitchPortSCVMMSettingData

VMMDHCPSvr.mof
  • Msvmm_DhcpV4PortClientOptionsInfo
  • Msvmm_DhcpV4PortBindingOptionsPolicy
  • Msvmm_DhcpV4PortReservationPolicy
  • Msvmm_DhcpV4PortPolicy
  • Msvmm_DhcpV4PortInfo

Running the following PowerShell on the affected host fetches zero objects.

Get-CimClass -Namespace root/virtualization/v2 -classname *vmm*

Note: Running the same PowerShell on a host which does not have these updates installed, fetches VMM related classes.

 

Solution:

 

Use mofcomp to add the VMM-related classes and class instances back to the WMI repository. On the affected host, run the following commands:

Mofcomp "%systemdrive%\Program Files\Microsoft System Center 2016\Virtual Machine Manager\setup\scvmmswitchportsettings.mof"

Mofcomp "%systemdrive%\Program Files\Microsoft System Center 2016\Virtual Machine Manager\DHCPServerExtension\VMMDHCPSvr.mof"

 

Note:
  • After you add the classes to the WMI repository, refresh the host in VMM. Make sure ‘System Center 2016 Update Rollup 6’ is installed on the VMM server, as this update brings improvements to VMM host refresh time.
  • If you are running a System Center Virtual Machine Manager Semi-Annual Channel (SAC) release, the paths for the mof files are:
%systemdrive%\Program Files\Microsoft System Center\Virtual Machine Manager\setup\scvmmswitchportsettings.mof
%systemdrive%\Program Files\Microsoft System Center\Virtual Machine Manager\DHCPServerExtension\VMMDHCPSvr.mof
  • If your environment has many hosts, you can script the process: enumerate the hosts from VMM using the ‘Get-SCVMHost’ PowerShell cmdlet and use remote PowerShell to register the VMM WMI classes as suggested above; a sketch follows below.
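A minimal sketch of that scripted approach, assuming the 2016 LTSC paths shown above (adjust for SAC):

#Re-register the VMM WMI classes on every VMM-managed host
$mofs = @(
    'C:\Program Files\Microsoft System Center 2016\Virtual Machine Manager\setup\scvmmswitchportsettings.mof',
    'C:\Program Files\Microsoft System Center 2016\Virtual Machine Manager\DHCPServerExtension\VMMDHCPSvr.mof'
)
$vmHosts = Get-SCVMHost | Select-Object -ExpandProperty Name
Invoke-Command -ComputerName $vmHosts -ScriptBlock {
    param($mofList)
    foreach ($mof in $mofList) { mofcomp.exe $mof }
} -ArgumentList (, $mofs)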

 

SCCM on Windows Server 2016: The Defender Gotcha


Hello! My name is Todd Linke, and I am a Premier Field Engineer at Microsoft where I specialize in System Center Configuration Manager.

I was working with some customers who were seeing strange behavior on their SCCM Site Servers. In one case, an unusually high percentage of clients had corrupt hardware inventories. Looking at the log files, we could see that client inventories were being sent successfully to the Management Point, but when they were processed on the site server by SMS_INVENTORY_DATALOADER we were getting a “File in use” error. Using Process Monitor, we were able to determine that MsMpEng.exe (Windows Defender) was the process locking the file. We turned off “Real-Time Protection” for Defender and the errors stopped immediately.

What we thought was unusual, though, is that they were using a 3rd-party antivirus solution, which they believed would disable Windows Defender when installed.

In the other case, Software Update Compliance status was missing in action. The MP_FILE_DISPATCH_MONITOR component on the Software Update Point server was unable to copy client status messages to the proper inboxes on the Primary Site Server. This time the error being reported was “The network path does not exist”. Once again, Process Monitor showed that the files were in use by MsMpEng.exe, and once again, turning off “Real-Time Protection” solved the issue immediately. This customer, too, was using a 3rd-party antivirus solution. At both customers the proper exclusions for SCCM were configured in their 3rd-party antivirus, which would normally prevent these types of issues.

What set these two servers apart from their other SCCM servers is that they were running Windows Server 2016.

As you may or may not know, Microsoft included Windows Defender in Server 2016, where it is enabled by default. Unlike in previous versions of Windows Server, installing a 3rd party Antivirus will not automatically disable Windows Defender. The following page of the Server 2016 online documentation describes exactly how this works:

https://docs.microsoft.com/en-us/windows-server/security/windows-defender/windows-defender-overview-windows-server

There are two solutions for this situation:

  1. Disable Windows Defender Real Time Protection via Group Policy by setting the “Turn off Real-Time Protection” to “Enabled”. You can find more details at the following location:

    https://docs.microsoft.com/en-us/windows/security/threat-protection/windows-defender-antivirus/configure-real-time-protection-windows-defender-antivirus

  2. Configure the recommended SCCM antivirus scanning exclusions for Windows Defender using either Group Policy or SCCM; an illustrative snippet follows this list. A great list of SCCM scanning exclusions can be found in this blog post by Brandon McMillan, who is also an SCCM PFE at Microsoft:

    https://blogs.technet.microsoft.com/systemcenterpfe/2017/05/24/configuration-manager-current-branch-antivirus-update/
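To illustrate the second option locally, exclusions can also be added with the built-in Defender cmdlets. The path below is a single hypothetical example, not the full recommended list from Brandon’s post:

Add-MpPreference -ExclusionPath 'E:\Program Files\Microsoft Configuration Manager\Inboxes'
Get-MpPreference | Select-Object -ExpandProperty ExclusionPath   #verify the exclusion took effect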

One of the many great features in SCCM is the ability to use Baselines to monitor SCCM Client devices for specific issues or symptoms. If you would like to verify this in your environment, run the following script on your Site Server to create a Configuration Item and Baseline both named “Verify Windows Defender Real-Time Scanning Status”.

Then deploy the baseline to a collection containing only Windows Server 2016 Devices. Any devices that show Non-Compliant have Real-Time Scanning enabled.

PowerShell code:

#Load SCCM cmdlets
$CMConsolePath = Get-ItemPropertyValue -Path 'HKLM:\SOFTWARE\Microsoft\SMS\Setup' -Name 'UI Installation Directory'
$CMModulePath = "$CMConsolePath\bin\ConfigurationManager.psd1"
Import-Module $CMModulePath

#Get CM SiteCode
$ProviderInfo = Get-WmiObject -Class SMS_ProviderLocation -Namespace 'root\SMS' -ComputerName $Env:ComputerName
$SiteCode = "$($ProviderInfo.SiteCode):"

#Change to CM PSDrive
Set-Location $SiteCode

#Set discovery script PS code
$DiscoveryScript = @"
`$(Get-MpPreference).DisableRealtimeMonitoring
"@

#Create Configuration Item
$ConfigItem = New-CMConfigurationItem -Name "Verify Windows Defender Real-Time Scanning Status" -CreationType WindowsOS

#Add compliance rule to CI
$ConfigItem | Add-CMComplianceSettingScript -DataType String -DiscoveryScriptLanguage PowerShell -DiscoveryScriptText $DiscoveryScript -SettingName "Defender Real-Time Protection Setting" -NoRule -Is64Bit
$CompSetting = $ConfigItem | Get-CMComplianceSetting -SettingName "Defender Real-Time Protection Setting"
$CompRule = $CompSetting | New-CMComplianceRuleValue -RuleName "Is False" -ExpressionOperator IsEquals -ExpectedValue "True"
$FinishedCI = $ConfigItem | Add-CMComplianceSettingRule -Rule $CompRule

#Add CI to new Baseline
$CMBaseline = New-CMBaseline -Name $ConfigItem.LocalizedDisplayName
$FinishedBL = Set-CMBaseline -Name $ConfigItem.LocalizedDisplayName -AddOSConfigurationItem $ConfigItem.CI_ID

Thanks for reading!

Infrastructure + Security: Noteworthy News (December, 2018)


Hi there! Stanislav Belov here to bring you the next issue of the Infrastructure + Security: Noteworthy News series!  

As a reminder, the Noteworthy News series covers various areas, to include interesting news, announcements, links, tips and tricks from Windows, Azure, and Security worlds on a monthly basis.

Microsoft Azure
Introducing the new Azure PowerShell Az module
Starting in December 2018, the Azure PowerShell Az module is generally available and is now the intended PowerShell module for interacting with Azure. Az offers shorter commands, improved stability, and cross-platform support. Az also offers feature parity and an easy migration path from AzureRM.
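In practice, migration can be as simple as the following sketch (module and cmdlet names are real; -AllowClobber is only needed if AzureRM is already installed):

# Install the new Az module and keep old AzureRM command names working during migration
Install-Module -Name Az -Scope CurrentUser -AllowClobber
Enable-AzureRmAlias -Scope CurrentUser
Connect-AzAccount    # note the shorter Az prefix (was Connect-AzureRmAccount)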
Announcing Azure Dedicated HSM availability
The Microsoft Azure Dedicated Hardware Security Module (HSM) service provides cryptographic key storage in Azure and meets the most stringent customer security and compliance requirements. This service is the ideal solution for customers requiring FIPS 140-2 Level 3 validated devices with complete and exclusive control of the HSM appliance. The Azure Dedicated HSM service uses SafeNet Luna Network HSM 7 devices from Gemalto. This device offers the highest levels of performance and cryptographic integration options and makes it simple for you to migrate HSM-protected applications to Azure. The Azure Dedicated HSM is leased on a single-tenant basis.
An easy way to bring back your Azure VM with In-Place restore
We are excited to announce in-place restore of disks in IaaS VMs along with simplified restore improvements in Azure Backup. This feature helps roll back or fix corrupted virtual machines through in-place restore without the need to spin up a new VM. With the introduction of this feature, customers have multiple choices for IaaS VM restore: create a new VM, restore disks, or replace disks.
Windows Server
Windows Server 2019 Includes OpenSSH

The OpenSSH client and server are now available as a supported Feature-on-Demand in Windows Server 2019 and Windows 10 1809! The Win32 port of OpenSSH was first included in the Windows 10 Fall Creators Update and Windows Server 1709 as a pre-release feature. In the Windows 10 1803 release, OpenSSH was released as a supported feature on-demand component, but there was not a supported release on Windows Server until now.

Windows Client
Microsoft Edge: Making the web better through more open source collaboration

For the past few years, Microsoft has meaningfully increased participation in the open source software (OSS) community, becoming one of the world’s largest supporters of OSS projects. Today we’re announcing that we intend to adopt the Chromium open source project in the development of Microsoft Edge on the desktop to create better web compatibility for our customers and less fragmentation of the web for all web developers. As part of this, we intend to become a significant contributor to the Chromium project, in a way that can make not just Microsoft Edge — but other browsers as well — better on both PCs and other devices.

Security
The evolution of Microsoft Threat Protection, December update

December was another month of significant development for Microsoft Threat Protection capabilities. As a quick recap, Microsoft Threat Protection is an integrated solution securing the modern workplace across identities, endpoints, user data, cloud apps, and infrastructure. Last month, we shared updates on capabilities for securing identities, endpoints, user data, and cloud apps. This month, we provide an update for Azure Security Center which secures organizations from threats across hybrid cloud workloads. Additionally, we overview a real-world scenario showcasing Microsoft Threat Protection in action.

Tackling phishing with signal-sharing and machine learning
Across services in Microsoft Threat Protection, the correlation of security signals enhances the comprehensive and integrated security for identities, endpoints, user data, cloud apps, and infrastructure. Our industry-leading visibility into the entire attack chain translates to enriched protection that’s evident in many different attack scenarios, including flashy cyberattacks, massive malware campaigns, and even small-scale, localized attacks.
Zero Trust part 1: Identity and access management
Once in a while, a simple phrase captures our imagination, expressing a great way to think about a problem. Zero Trust is such a phrase. Today, I’ll define Zero Trust and then discuss the first step to enabling a Zero Trust model—strong identity and access management. In subsequent blogs, we’ll cover each capability of a Zero Trust model in detail and how Microsoft helps you in these areas and end the series of blogs by discussing Microsoft’s holistic approach to Zero Trust and our framework.
Rule your inbox with Microsoft Cloud App Security
As part of our ongoing research to analyze trends and attack techniques, the Microsoft Cloud App Security team was able to deploy two new detection methods to help tackle malicious activities against Exchange inbox accounts protected with Microsoft Cloud App Security. Since we’ve started rolling out these new detections, we are seeing more than 3,000 suspicious rule alerts each month.
Insights from the MITRE ATT&CK-based evaluation of Windows Defender ATP
In MITRE’s evaluation of endpoint detection and response solutions, Windows Defender Advanced Threat Protection demonstrated industry-leading optics and detection capabilities. The breadth of telemetry, the strength of threat intelligence, and the advanced, automatic detection through machine learning, heuristics, and behavior monitoring delivered comprehensive coverage of attacker techniques across the entire attack chain.
Windows Defender ATP device risk score exposes new cyberattack, drives Conditional access to protect networks
Several weeks ago, the Windows Defender Advanced Threat Protection (Windows Defender ATP) team uncovered a new cyberattack that targeted several high-profile organizations in the energy and food and beverage sectors in Asia. Given the target region and verticals, the attack chain, and the toolsets used, we believe the threat actor that the industry refers to as Tropic Trooper was likely behind the attack.
Reduce your potential attack surface using Azure ATP Lateral Movement Paths
Azure Advanced Threat Protection (Azure ATP) provides invaluable insights on identity configurations and suggested security best-practices across the enterprise. A key component of Azure ATP’s insights is Lateral Movement Paths or LMPs. Azure ATP LMPs are visual guides that help you quickly understand and identify exactly how attackers can move laterally inside your network. The purpose of lateral movements within a cyber-attack kill chain are for attackers to gain and compromise your sensitive accounts towards domain dominance. Azure ATP LMPs provide easy to interpret, direct visual guidance on your most vulnerable sensitive accounts, assists in helping you mitigate and close access for potential attacker domain dominance.
Analysis of cyberattack on U.S. think tanks, non-profits, public sector by unidentified attackers
Reuters recently reported a hacking campaign focused on a wide range of targets across the globe. In the days leading to the Reuters publication, Microsoft researchers were closely tracking the same campaign. Our sensors revealed that the campaign primarily targeted public sector institutions and non-governmental organizations like think tanks and research centers, but also included educational institutions and private-sector corporations in the oil and gas, chemical, and hospitality industries.
Vulnerabilities and Updates
Out of Band (OOB) Security Update Released for Internet Explorer for all supported versions of Windows Client and Server

A remote code execution vulnerability exists in the way that the scripting engine handles objects in memory in Internet Explorer. The vulnerability could corrupt memory in such a way that an attacker could execute arbitrary code in the context of the current user. An attacker who successfully exploited the vulnerability could gain the same user rights as the current user. If the current user is logged on with administrative user rights, an attacker who successfully exploited the vulnerability could take control of an affected system. An attacker could then install programs; view, change, or delete data; or create new accounts with full user rights.

Windows monthly security and quality updates overview

Today’s global cybersecurity threats are both dynamic and sophisticated, and new vulnerabilities are discovered almost every day. We focus on protecting customers from these security threats by providing security updates on a timely basis and with high quality. We strive to help you keep your Windows devices, regardless of which version of Windows they are running, up to date with the latest monthly quality updates to help mitigate the evolving threat landscape. Here is an overview of how we deliver these critical updates on a massive scale as a key component of our ongoing Windows as a service effort.

Support Lifecycle
End of Support for SCEP for Mac and SCEP for Linux on December 31, 2018

Support for System Center Endpoint Protection (SCEP) for Mac and Linux (all versions) ends on December 31, 2018. Availability of new virus definitions for SCEP for Mac and SCEP for Linux may be discontinued after the end of support. This discontinuation may occur without notice. If you are using any version of SCEP for Mac or SCEP for Linux, plan to migrate to a replacement endpoint protection product for Mac and Linux clients.

Extended Security Updates for SQL Server and Windows Server 2008/2008 R2: Frequently Asked Questions (PDF)

On January 14, 2020, support for Windows Server 2008 and 2008 R2 will end. That means the end of regular security updates. Don’t let your infrastructure and applications go unprotected. We’re here to help you migrate to current versions for greater security, performance and innovation.

Products reaching End of Support for 2018

Products reaching End of Support for 2019

Products reaching End of Support for 2020

Microsoft Premier Support News
We are excited to announce the release of a new service: Activate Azure with Automated Deployments. In this two-day service, customers will learn about Azure Resource Manager (ARM) Templates, Automation Runbooks, Desired State Configuration (DSC), and Azure Automation. Customers will apply the learning with a proof of concept showcasing an end-to-end solution using Azure Automation to deploy a SharePoint farm including SQL Server from the ground up. Experience the power and flexibility of Infrastructure as Code and understand how it fits into Azure DevOps best practices.
Activate Windows Hello for Business is a 3-day remote or onsite service that allows customer organizations to learn what is needed to implement Windows Hello for Business in their environment. It sets up a Proof of Concept (POC) that showcases Windows Hello for Business based on the on-premises Key Trust model. This model contains all components of Windows Hello for Business, allowing you to get hands-on experience and understand how your organization can benefit from password-less authentication.
Check out Microsoft Services public blog for new Proactive Services as well as new features and capabilities of the Services Hub, On-demand Assessments, and On-demand Learning platforms.

Windows 10 and reserved storage


Reserving disk space to keep Windows 10 up to date


Windows Insiders: To enable this new feature now, please see the last section “Testing out reserved storage” and complete the quest.


Starting with the next major update, we’re making a few changes to how Windows 10 manages disk space. Through reserved storage, some disk space will be set aside to be used by updates, apps, temporary files, and system caches. Our goal is to improve the day-to-day function of your PC by ensuring critical OS functions always have access to disk space. Without reserved storage, if a user almost fills up her or his storage, several Windows and application scenarios that need free space to function become unreliable. With reserved storage, updates, apps, temporary files, and caches are less likely to take away from valuable free space and should continue to operate as expected.

Reserved storage will be introduced automatically on devices that come with version 1903 pre-installed or where 1903 was clean-installed. You don’t need to set anything up – this process runs automatically in the background. The rest of this blog post shares additional details on how reserved storage can help optimize your device.

How does it work?

When apps and system processes create temporary files, these files will automatically be placed into reserved storage. These temporary files won’t consume free user space when they are created, and will be less likely to do so as they grow in number, provided that the reserve isn’t full. Since disk space has been set aside for this purpose, your device will function more reliably. Storage Sense will automatically remove unneeded temporary files, but if for some reason the reserve does fill up, Windows will continue to operate as expected while temporarily consuming some disk space outside of the reserve.

Windows Updates made easy

Updates help keep your device and data safe and secure, along with introducing new features to help you work and play the way you want. Every update temporarily requires some free disk space to download and install. On devices with reserved storage, updates will use the reserved space first.

When it’s time for an update, the temporary unneeded OS files in the reserved storage will be deleted and the update will use the full reserve area. This will enable most PCs to download and install an update without having to free up any of your disk space, even when you have minimal free disk space. If for some reason Windows Update needs more space than is reserved, it will automatically use other available free space. If that’s not enough, Windows will guide you through steps to temporarily extend your storage, for example with a USB drive, or to free up disk space.

How much of my storage is reserved?

In the next major release of Windows (19H1), we anticipate that reserved storage will start at about 7GB; however, the amount of reserved space will vary over time based on how you use your device. For example, temporary files that consume general free space today on your device may consume space from reserved storage in the future. Additionally, over the last several releases we’ve reduced the size of Windows for most customers. We may adjust the size of reserved storage in the future based on diagnostic data or feedback. Reserved storage cannot be removed from the OS, but you can reduce the amount of space it reserves; more details below.

The following two factors influence how reserved storage changes size on your device:

  • Optional features. Many optional features are available for Windows. These may be pre-installed, acquired on demand by the system, or installed manually by you. When an optional feature is installed, Windows will increase the amount of reserved storage to ensure there is space to maintain this feature on your device when updates are installed. You can see which features are installed on your device by going to Settings > Apps > Apps & features > Manage optional features. You can reduce the amount of space required for reserved storage on your device by uninstalling optional features you are not using.
  • Installed Languages. Windows is localized into many languages. Although most of our customers only use one language at a time, some customers switch between two or more languages. When additional languages are installed, Windows will increase the amount of reserved storage to ensure there is space to maintain these languages when updates are installed. You can see which languages are installed on your device by going to Settings > Time & Language > Language. You can reduce the amount of space required for reserved storage on your device by uninstalling languages you aren’t using.

Follow these steps to check the reserved storage size: Click Start > Search for “Storage settings” > Click “Show more categories” > Click “System & reserved” > Look at the “Reserved storage” size.
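If you prefer the command line, you can also query the reserve with fsutil from an elevated prompt – a quick sketch, assuming your build includes the storagereserve verb:

fsutil storagereserve query C: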

Testing out reserved storage

This feature is available to Windows Insiders running Build 18298 or newer.

Step 1: Become a Windows Insider

The Windows Insider Program brings millions of people around the world together to shape the next evolution of Windows 10. Become an Insider to gain exclusive access to upcoming Windows 10 features and the ability to submit feedback directly to Microsoft Engineers. Learn how to get started: Windows Insiders Quick Start

Step 2: Complete this quest to start using this feature.


Aaron Lower contributed to this post.
Follow Aaron Lower on LinkedIn
Follow Jesse Rajwan on LinkedIn

The PowerShell-Docs repo is moving


On January 16, 2019 at 5:00PM PST, the PowerShell-Docs repositories are moving from the PowerShell
organization to the MicrosoftDocs organization in GitHub.

The tools we use to build the documentation are designed to work in the MicrosoftDocs org. Moving
the repository lets us build the foundation for future improvements in our documentation experience.

Impact of the move

During the move there may be some downtime. The affected repositories will be inaccessible during
the move process. Also, the documentation processes will be paused. After the move, we need to test
access permissions and automation scripts.

After these tasks are complete, access and operations will return to normal. GitHub automatically
redirects requests to the old repo URL to the new location.

For more information about transferring repositories in GitHub,
see About repository transfers.

  • If the transferred repository has any forks, then those forks will remain associated with the
    repository after the transfer is complete.
  • All Git information about commits, including contributions, is preserved.
  • All of the issues and pull requests remain intact when transferring a repository.
  • All links to the previous repository location are automatically redirected to the new location.

When you use git clone, git fetch, or git push on a transferred repository, these commands will
redirect to the new repository location or URL.

However, to avoid confusion, we strongly recommend updating any existing local clones to point to
the new repository URL. You can do this by using git remote on the command line:

git remote set-url origin new_url
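For example, a clone of the main repository would be re-pointed (and verified) like this:

git remote set-url origin https://github.com/MicrosoftDocs/PowerShell-Docs.git
git remote -v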

For more information, see Changing a remote’s URL.

Which repositories are being moved?

The following repositories are being transferred:

  • PowerShell/PowerShell-Docs
  • PowerShell/powerShell-Docs.cs-cz
  • PowerShell/powerShell-Docs.de-de
  • PowerShell/powerShell-Docs.es-es
  • PowerShell/powerShell-Docs.fr-fr
  • PowerShell/powerShell-Docs.hu-hu
  • PowerShell/powerShell-Docs.it-it
  • PowerShell/powerShell-Docs.ja-jp
  • PowerShell/powerShell-Docs.ko-kr
  • PowerShell/powerShell-Docs.nl-nl
  • PowerShell/powerShell-Docs.pl-pl
  • PowerShell/powerShell-Docs.pt-br
  • PowerShell/powerShell-Docs.pt-pt
  • PowerShell/powerShell-Docs.ru-ru
  • PowerShell/powerShell-Docs.sv-se
  • PowerShell/powerShell-Docs.tr-tr
  • PowerShell/powerShell-Docs.zh-cn
  • PowerShell/powerShell-Docs.zh-tw

Call to action

If you have a fork that you cloned, change your remote configuration to point to the new upstream URL.

Help us make the documentation better.

  • Submit issues when you find a problem in the docs.
  • Suggest fixes to documentation by submitting changes through the PR process.

 

Sean Wheeler
Senior Content Developer for PowerShell
https://github.com/sdwheeler


Automating Security workflows with Microsoft’s CASB and MS Flow


As cloud security becomes an increasingly important concern for organizations of all sizes, the role and importance of Security Operations Centers (SOC) continues to expand. While end users adopt new cloud apps and services daily, the security professionals who keep track of security incidents remain a scarce resource. Consequently, SOC teams are looking for solutions that help automate processes where possible, to reduce the number of incidents that require their direct oversight and interaction.

 

Microsoft Cloud App Security now integrates with Microsoft Flow to provide centralized alert automation and orchestration of custom workflows - on your terms. It enables the use of an ecosystem of connectors in Microsoft Flow to create playbooks that work with the systems of your choice, existing processes you may already have, and enables organizations to automate the triage of alerts.

 

SOC teams are tasked with two functional areas - monitoring security incidents and taking action based on the available information - to uphold or restore the security of an organization.

 

They are expected to implement and support technology solutions that can sustain virtually every phase of enterprise activity. But as cyberthreats continue to evolve and business units leverage an ever-increasing number of new cloud apps and services, SOC teams struggle to respond to and recover from security incidents.

 

Microsoft Cloud App Security’s new integration with Microsoft Flow provides a series of powerful use cases to enable centralized alert automation and orchestration, leveraging out-of-the-box and custom workflow playbooks that work with the systems of your choice. With connectors for more than 100 3rd party solutions, such as ServiceNow, Jira and SAP, the integration could remove the need to send alerts to a SIEM or write custom code for simple workflows.

 

Use cases:

With these powerful services now natively integrated, we’ve created a list of scenarios based on common customer requests that can help you streamline your own processes.

 

Monitoring

1.  Routing CAS alerts to different SOC units

Large, global organizations often have dedicated SOC teams who oversee either specific departments or regions to enable them to triage more effectively.

 

Consequently, a key ask has been for our CASB solution to allow organizations to set up similar routing, assigning new alerts to the relevant SOC teams as they are raised.

 

Via the native integration with Microsoft Flow, ticket routing can now be based on the type of alert and on Azure AD attributes such as user location, email address, UPN and more, providing a fully flexible model to route alerts based on the setup of your SOC teams and make the alerts work for your organization.

 

Figure 1 shows the distribution to the relevant SOC teams when an alert is generated. The playbook is configured to look up the user’s office location in Azure AD. If it’s North America (NA), it posts a message in the NA SOC channel on Microsoft Teams. If the user’s location is identified as Asia, the playbook includes a lookup of the user’s job title, to take a custom action if the user is a VP.

 

Figure 1: Playbook to route CAS alerts to different SOC units

 

 

2.  Automatic ticket generation in Management tools like Jira or ServiceNow when a CAS alert is raised

Many organizations use ticketing systems like ServiceNow or Jira to investigate alerts generated by Cloud App Security. By using the ServiceNow connector in Flow, you can create a playbook to automatically create an incident in ServiceNow when Cloud App Security generates an alert. Incidents can be populated with alert attributes such as description, severity and user information, to help with alert investigation. Flow also has connectors for Slack and Jira to execute similar workflows in those services.

 

Figure 2: Playbook to create incident in ticketing systems

 

 

Automating response

3.  Request manager approval to execute actions (e.g. disable user account) for a CAS alert

While investigating an alert, SOC analysts may sometimes require approval from a manager to execute certain actions - such as disabling the user account. By creating a playbook in Flow using Outlook and Azure AD connectors, you can automatically execute this workflow when Cloud App Security generates an alert. Based on the response, the playbook can also dismiss the alert as false positive or resolve the alert after the investigation has completed.

 

In the below example, a playbook is configured to post a message for the SOC team and send an email to the manager to request input on how to investigate the alert.

 

Figure 3: E-mail requesting manager input for alert investigation

 

 

4.  Request user input to investigate CAS alert

Certain alert types, such as an “Activity from infrequent country” alert, may require additional input or context from the affected user before the security operations teams can act. In these cases, we can create a playbook that sends a text or email to the user for two-factor confirmation that the activity CAS detected indeed originated from the user.

 

Figure 4: Send text message to user to confirm user activity

 

 

5.  Block unsanctioned apps on the firewall using CAS discovery alerts

By using Cloud App Security Discovery policies, security teams can identify apps that do not meet the guidelines established by an organization. When Cloud App Security generates a discovery alert for such an application, we can execute a playbook to automatically block that application’s domain on the firewall. To execute the configuration change on the firewall, we use the HTTP connector and custom code against the firewall’s API, since some firewalls - in this case Palo Alto - don’t have a connector in Flow. If firewall configuration changes need to be approved by the networking team, you can use the Outlook connector to get their approval before executing the domain block as part of the same Flow.

 

Figure 5: Flow configuration to block unsanctioned app domains on firewall

 

 

With this new integration, you can now leverage Microsoft Cloud App Security as a fully integrated solution in your security operations setup to ultimately save time and optimize the use of your security resources by automating key processes.

 

More info and feedback

If you want to help us create more powerful workflow playbooks, provide suggestions and feedback on the Flow Community site.

 

Learn how to get started with Microsoft Cloud App Security with our detailed technical documentation. Don’t have Microsoft Cloud App Security? Start a free trial today!

 

As always, we want to hear from you! If you have any suggestions, questions, or comments, please visit us on our Tech Community page.

DSC Resource Kit Release January 2019


We just released the DSC Resource Kit!

This release includes updates to 14 DSC resource modules. In the past 6 weeks, 41 pull requests have been merged and 54 issues have been closed, all thanks to our amazing community!

The modules updated in this release are:

  • ActiveDirectoryCSDsc
  • AuditPolicyDsc
  • CertificateDsc
  • ComputerManagementDsc
  • NetworkingDsc
  • SecurityPolicyDsc
  • SqlServerDsc
  • StorageDsc
  • xActiveDirectory
  • xBitlocker
  • xExchange
  • xFailOverCluster
  • xHyper-V
  • xWebAdministration

Several of these modules were released to remove the hidden files/folders from this issue. This issue should now be fixed for all modules except DFSDsc which is waiting for some fixes to its tests.

For a detailed list of the resource modules and fixes in this release, see the Included in this Release section below.

Our latest community call for the DSC Resource Kit was today, January 9. A recording is available on YouTube here. Join us for the next call at 12PM (Pacific time) on February 13 to ask questions and give feedback about your experience with the DSC Resource Kit.

The next DSC Resource Kit release will be on Wednesday, February 20.

We strongly encourage you to update to the newest version of all modules using the PowerShell Gallery, and don’t forget to give us your feedback in the comments below, on GitHub, or on Twitter (@PowerShell_Team)!

Please see our documentation here for information on the support of these resource modules.

Included in this Release

You can see a detailed summary of all changes included in this release in the table below. For past release notes, go to the README.md or CHANGELOG.md file on the GitHub repository page for a specific module (see the How to Find DSC Resource Modules on GitHub section below for details on finding the GitHub page for a specific module).

Module Name Version Release Notes
ActiveDirectoryCSDsc 3.1.0.0
  • Updated LICENSE file to match the Microsoft Open Source Team standard.
  • Added .VSCode settings for applying DSC PSSA rules – fixes Issue 60.
  • Added fix for two tier PKI deployment fails on initial deployment, not error – fixes Issue 57.
AuditPolicyDsc 1.4.0.0
  • Explicitly removed extra hidden files from release package
CertificateDsc 4.3.0.0
  • Updated certificate import to only use Import-CertificateEx – fixes Issue 161
  • Update LICENSE file to match the Microsoft Open Source Team standard -fixes Issue 164.
  • Opted into Common Tests – fixes Issue 168:
    • Required Script Analyzer Rules
    • Flagged Script Analyzer Rules
    • New Error-Level Script Analyzer Rules
    • Custom Script Analyzer Rules
    • Validate Example Files To Be Published
    • Validate Markdown Links
    • Relative Path Length
  • CertificateExport:
    • Fixed bug causing PFX export with matchsource enabled to fail – fixes Issue 117
ComputerManagementDsc 6.1.0.0
  • Updated LICENSE file to match the Microsoft Open Source Team standard. Fixes Issue 197.
  • Explicitly removed extra hidden files from release package
NetworkingDsc 6.3.0.0
  • MSFT_IPAddress:
    • Updated to allow retaining existing addresses in order to support cluster configurations as well
SecurityPolicyDsc 2.7.0.0
  • Bug fix – Issue 83 – Network_access_Remotely_accessible_registry_paths_and_subpaths correctly applies multiple paths
  • Update LICENSE file to match the Microsoft Open Source Team standard
SqlServerDsc 12.2.0.0
  • Changes to SqlServerDsc
    • During testing in AppVeyor the Build Worker is restarted in the install step to make sure there are no residual changes left from a previous SQL Server install on the Build Worker done by the AppVeyor Team (issue 1260).
    • Code cleanup: Change parameter names of Connect-SQL to align with resources.
    • Updated README.md in the Examples folder.
      • Added a link to the new xADObjectPermissionEntry examples in ActiveDirectory, fixed a broken link and a typo. Adam Rush (@adamrushuk)
    • Change to SqlServerLogin so it doesn’t check properties for absent logins.
StorageDsc 4.4.0.0
  • Refactored module folder structure to move resource to root folder of repository and remove test harness – fixes Issue 169.
  • Updated Examples to support deployment to PowerShell Gallery scripts.
  • Removed limitation on using Pester 4.0.8 during AppVeyor CI.
  • Moved the Code of Conduct text out of the README.md and into a CODE_OF_CONDUCT.md file.
  • Explicitly removed extra hidden files from release package
xActiveDirectory 2.23.0.0
  • Explicitly removed extra hidden files from release package
xBitlocker 1.4.0.0
  • Change double quoted string literals to single quotes
  • Add spaces between array members
  • Add spaces between variable types and variable names
  • Add spaces between comment hashtag and comments
  • Explicitly removed extra hidden files from release package
xExchange 1.26.0.0
  • Add support for Exchange Server 2019
  • Added additional parameters to the MSFT_xExchUMService resource
  • Rename improperly named functions, and add comment based help in MSFT_xExchClientAccessServer, MSFT_xExchDatabaseAvailabilityGroupNetwork, MSFT_xExchEcpVirtualDirectory, MSFT_xExchExchangeCertificate, MSFT_xExchImapSettings.
  • Added additional parameters to the MSFT_xExchUMCallRouterSettings resource
  • Rename improper function names in MSFT_xExchDatabaseAvailabilityGroup, MSFT_xExchJetstress, MSFT_xExchJetstressCleanup, MSFT_xExchMailboxDatabase, MSFT_xExchMailboxDatabaseCopy, MSFT_xExchMailboxServer, MSFT_xExchMaintenanceMode, MSFT_xExchMapiVirtualDirectory, MSFT_xExchOabVirtualDirectory, MSFT_xExchOutlookAnywhere, MSFT_xExchOwaVirtualDirectory, MSFT_xExchPopSettings, MSFT_xExchPowershellVirtualDirectory, MSFT_xExchReceiveConnector, MSFT_xExchWaitForMailboxDatabase, and MSFT_xExchWebServicesVirtualDirectory.
  • Add remaining unit and integration tests for MSFT_xExchExchangeServer.
xFailOverCluster 1.12.0.0
  • Explicitly removed extra hidden files from release package
xHyper-V 3.15.0.0
  • Explicitly removed extra hidden files from release package
xWebAdministration 2.4.0.0
  • Explicitly removed extra hidden files from release package

How to Find Released DSC Resource Modules

To see a list of all released DSC Resource Kit modules, go to the PowerShell Gallery and display all modules tagged as DSCResourceKit. You can also enter a module’s name in the search box in the upper right corner of the PowerShell Gallery to find a specific module.

Of course, you can also always use PowerShellGet (available starting in WMF 5.0) to find modules with DSC Resources:

# To list all modules that tagged as DSCResourceKit
Find-Module -Tag DSCResourceKit 
# To list all DSC resources from all sources 
Find-DscResource

Please note only those modules released by the PowerShell Team are currently considered part of the ‘DSC Resource Kit’ regardless of the presence of the ‘DSC Resource Kit’ tag in the PowerShell Gallery.

To find a specific module, go directly to its URL on the PowerShell Gallery:
http://www.powershellgallery.com/packages/< module name >
For example:
http://www.powershellgallery.com/packages/xWebAdministration

How to Install DSC Resource Modules From the PowerShell Gallery

We recommend that you use PowerShellGet to install DSC resource modules:

Install-Module -Name < module name >

For example:

Install-Module -Name xWebAdministration

To update all previously installed modules at once, open an elevated PowerShell prompt and use this command:

Update-Module

After installing modules, you can discover all DSC resources available to your local system with this command:

Get-DscResource

How to Find DSC Resource Modules on GitHub

All resource modules in the DSC Resource Kit are available open-source on GitHub.
You can see the most recent state of a resource module by visiting its GitHub page at:
https://github.com/PowerShell/< module name >
For example, for the CertificateDsc module, go to:
https://github.com/PowerShell/CertificateDsc.

All DSC modules are also listed as submodules of the DscResources repository in the DscResources folder and the xDscResources folder.

How to Contribute

You are more than welcome to contribute to the development of the DSC Resource Kit! There are several different ways you can help. You can create new DSC resources or modules, add test automation, improve documentation, fix existing issues, or open new ones.
See our contributing guide for more info on how to become a DSC Resource Kit contributor.

If you would like to help, please take a look at the list of open issues for the DscResources repository.
You can also check issues for specific resource modules by going to:
https://github.com/PowerShell/< module name >/issues
For example:
https://github.com/PowerShell/xPSDesiredStateConfiguration/issues

Your help in developing the DSC Resource Kit is invaluable to us!

Questions, comments?

If you’re looking into using PowerShell DSC, have questions or issues with a current resource, or would like a new resource, let us know in the comments below, on Twitter (@PowerShell_Team), or by creating an issue on GitHub.

Katie Kragenbrink
Software Engineer
PowerShell DSC Team
@katiedsc (Twitter)
@kwirkykat (GitHub)

New experience for alerts generated by monitors in SCOM 2019


The existing alert closure experience for the alerts generated by monitors has been revamped to be more meaningful and provide better value.

If the alert was generated by a monitor, as a best practice you should allow the monitor to auto-resolve the alert when the health state returns to healthy, or close the alert manually once the health state returns to healthy (if auto-resolve is set to false).

If you close the alert while the object is in a warning, critical or unhealthy state, the problem remains unresolved, and no further alerts are generated, unless the health state for the monitor has also been reset (If the monitor is not reset, the same condition that generated an alert can occur again but no alert will be generated because the health state has not changed.)

This behaviour often led to a scenario where there was no active alert in the system even though the underlying problem remained unresolved. SCOM 2019 fixes the closure of alerts generated by monitors without resolving the underlying problem: an alert generated by a monitor cannot be closed unless the health state of the corresponding monitor is healthy.

Behavior in operations console

If you close an alert generated by a monitor (from the Operations Console “Active Alerts” view) while the monitor is in an unhealthy state, the following message will be displayed and the alert will not be closed:

“Alert(s) in the current selection cannot be closed as the monitor(s) which generated these alerts are still unhealthy. For more details on the alerts which could not be closed, view the “Alert Closure Failure” dashboard in the Operations Manager Web Console”

To close this alert, the health state of the monitor has to be reset. If “auto-resolve” for the monitor is set to true, the alert will be closed automatically when the health state resets; otherwise, the alert has to be closed manually after the health state reset.

Behaviour in Web console

If you close an alert generated by a monitor (from the “Active Alerts” dashboard, any other dashboard, or the alerts drill-down page of the web console) while the monitor is in an unhealthy state, the following message will be displayed and the alert will not be closed:

Active alerts dashboard (closing one alert generated by a monitor in an unhealthy state, by using the “Set resolution state” action)

Alerts drill-down page (closing the alert generated by a monitor in an unhealthy state, by changing the “Resolution State”)

To forcefully close these kinds of alerts, reset the health state of the monitor from the task available in the alerts drill-down page:

Or

Navigate to the new “Alert Closure Failure” dashboard available in the monitoring tree of the web console. This dashboard lists all the active alerts in SCOM which could not be closed because the monitor that generated them is still unhealthy. You can select the alert you want to forcefully close and reset the corresponding monitor by using the “Reset Health” action.

Note: This dashboard displays all the active alerts which could not be closed, irrespective of the tool from which the alert closure was triggered.

If an alert closure is triggered from third-party tools/systems (incident management/ticketing systems, etc.) and the alert could not be closed because the corresponding monitor is still unhealthy, an exception with the alert details will be passed back, which can be leveraged by the third-party tools/systems.

The following two APIs have been enhanced to enable this new behaviour (more detailed documentation on the changes to these APIs will be published soon):

Windows Security change affecting PowerShell


January 9, 2019

The recent (1/8/2019) Windows security patch for CVE-2019-0543 has introduced a breaking change for a PowerShell remoting scenario. It is narrowly scoped and should have low impact for most users.

The breaking change only affects local loopback remoting, which is a PowerShell remote connection made back to the same machine, while using non-Administrator credentials.

PowerShell remoting endpoints do not allow access to non-Administrator accounts by default. However, it is possible to modify endpoint configurations, or create new custom endpoint configurations, that do allow non-Administrator account access. So you would not be affected by this change unless you explicitly set up loopback endpoints on your machine to allow non-Administrator account access.

Example of broken loopback scenario

    # Create endpoint that allows Users group access
    PS > Register-PSSessionConfiguration -Name MyNonAdmin -SecurityDescriptorSddl 'O:NSG:BAD:P(A;;GA;;;BA)(A;;GA;;;BU)S:P(AU;FA;GA;;;WD)(AU;SA;GXGW;;;WD)' -Force
    
    # Create non-Admin credential
    PS > $nonAdminCred = Get-Credential ~NonAdminUser
    
    # Create a loopback remote session to custom endpoint using non-Admin credential
    PS > $session = New-PSSession -ComputerName localhost -ConfigurationName MyNonAdmin -Credential $nonAdminCred
    
    New-PSSession : [localhost] Connecting to remote server localhost failed with the following error message : The WSMan
    service could not launch a host process to process the given request.  Make sure the WSMan provider host server and
    proxy are properly registered. For more information, see the about_Remote_Troubleshooting Help topic.
    At line:1 char:1
    + New-PSSession -ComputerName localhost -ConfigurationName MyNonAdmin - ...
    + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        + CategoryInfo          : OpenError: (System.Manageme....RemoteRunspace:RemoteRunspace) [New-PSSession], PSRemotin
       gTransportException
        + FullyQualifiedErrorId : -2146959355,PSSessionOpenFailed
    

The above example fails only when using non-Administrator credentials and the connection is made back to the same machine (localhost). Administrator credentials still work, and the same scenario works when remoting off-box to another machine.

Example of working loopback scenario

    # Create Admin credential
    PS > $adminCred = Get-Credential ~AdminUser
    
    # Create a loopback remote session to custom endpoint using Admin credential
    PS > $session = New-PSSession -ComputerName localhost -ConfigurationName MyNonAdmin -Credential $adminCred
    PS > $session
    
     Id Name            ComputerName    ComputerType    State         ConfigurationName     Availability
     -- ----            ------------    ------------    -----         -----------------     ------------
      1 WinRM1          localhost       RemoteMachine   Opened        MyNonAdmin               Available

    The above example uses Administrator credentials to the same MyNonAdmin custom endpoint, and the connection is made back to the same machine (localhost). The session is created successfully using Administrator credentials.

    The breaking change is not in PowerShell but in a system security fix that restricts process creation between Windows sessions. This fix prevents WinRM (which PowerShell uses as a remoting transport and host) from successfully creating the remote session host for this particular scenario. There are no plans to update WinRM.

    This affects Windows PowerShell and PowerShell Core 6 (PSCore6) WinRM based remoting.

    This does not affect SSH remoting with PSCore6.

    This does not affect JEA (Just Enough Administration) sessions.

    A workaround for a loopback connection is to always use Administrator credentials.

    Another option is to use PSCore6 with SSH remoting.
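    For the SSH option, here is a minimal hedged sketch of a loopback session over SSH with PSCore6, assuming the OpenSSH server and the PowerShell SSH subsystem are already configured ("nonadminuser" is a hypothetical placeholder account):

    # Loopback remoting over SSH (PSCore6) does not use WinRM, so it is not
    # affected by this change. "nonadminuser" is a hypothetical local account.
    PS > $session = New-PSSession -HostName localhost -UserName nonadminuser
    PS > Invoke-Command -Session $session -ScriptBlock { whoami }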

    Paul Higinbotham
    Senior Software Engineer
    PowerShell Team

    Secure Credentials with Self-Signed Certificates for PowerShell Scripts


    Hello everyone, I’m Preston K. Parsard, specializing in Platforms, Azure Infrastructure and Automation topics, and I’d like to share some insights for securing PowerShell credentials using certificates. This post is based on a recent customer project, but we’ll also wrap a story around it on behalf of our made-up friends at our fictitious company Adatum.com.

    Adatum.com is a new IT consulting firm, and the DevSecOps team is evaluating approaches to securing PowerShell credentials so they can recommend the most suitable method for their customers considering credential security for scripting. At a minimum, the team also wants to adhere to the ISO/IEC 27000 family of standards to help their customers keep information assets secure when implementing any recommended solution. The sprint cycle for this team is an aggressive one week, which is how much time they schedule with each customer to provide a solution, so let’s begin our initial lightning sprint together!

    Meet the Team


    Figure 1: Adatum DevSecOps Team

        Dev: Dave Delta

        Dave is the developer and a recent computer science college graduate focusing on .NET, Python, and PowerShell for infrastructure development projects.

        Sec: Samantha Sierra

        Samantha is the team lead and an experienced engineer with a 10-year background in IT security, and she has also held development, operations, and management roles throughout her career.

        Ops: Oliver Oscar

        Oliver is fairly new to IT and has 3 years of experience with Windows administration and Active Directory. (Yes, I realize that Dave and Oliver have a striking resemblance, but they’re actually not twins. Trust me. 😉 )

    This post discusses:

    •    The reasons for making secure credentials available for interactive or scheduled scripts

    •    Certificate requirements

    •    Infrastructure requirements

    •    Scenarios for accounts to host combinations

    •    Creating and using certificates for credential encryption and decryption in scripts

    Technologies discussed:

    PowerShell, PKI

    Project Site: http://bit.ly/2Q8KY9m

     

    Why Should We Make Secure Credentials Available for Scripts?

    Samantha covers the rationale provided by Fabrikam, the team’s customer, for secure retrieval and use of credentials in their PowerShell scripts. It’s Monday, the first day of the weekly sprint cycle, at the daily scrum standup. Fabrikam is a small regional petrochemical refining company with operations in the south central US. They want to be able to launch a script either interactively or by means of a scheduled task or job and, after the initial setup, not be prompted for any required credential sets associated with service accounts during script execution.

    One particular use case cited by the customer is to interactively and remotely test DNS name resolution for their 700+ member servers in their primary data center. This is because they’re planning a major DNS migration project and need to test and report the results for a pilot subset of servers for name resolution, before expanding the scope to the entire set of systems. They want to be able to do this on demand to verify name resolution, troubleshoot or spot check any inconsistencies for multiple servers at a time, but not have to worry about always being prompted for and supplying credentials to reach these remote machines.
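    As a rough illustration of this use case, here is a minimal hedged sketch of an on-demand name resolution spot check; the input file and output properties are hypothetical placeholders, not part of the actual project script:

    # Hypothetical sketch: spot-check DNS resolution for a pilot list of servers.
    $pilotServers = Get-Content -Path .\pilot-servers.txt  # hypothetical input file
    foreach ($server in $pilotServers) {
        # Resolve-DnsName returns $null (with -ErrorAction SilentlyContinue) on failure
        $resolved = Resolve-DnsName -Name $server -ErrorAction SilentlyContinue
        [pscustomobject]@{ Server = $server; Resolved = [bool]$resolved }
    }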

    Since Fabrikam has several administrators with Active Directory privileged accounts that will be running commands or scripts, they also need to support scenarios where multiple administrators can log onto multiple machines, either on a set of designated jump/development servers or each individual’s own workstation, to launch interactive code or set up scheduled scripts. The solution must only allow administrators to import or use the encryption/decryption certificate if they know the private key password. This is like using a shared secret that allows the flexibility of the multiple accounts, multiple hosts scenario. When the combined certificate creation/password retrieval script runs, it should also implement transcript logging so that an audit trail is maintained. Furthermore, transcript logging will aid diagnostic efforts and debugging when required.
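    For the transcript requirement, a minimal sketch of what that logging could look like follows; the path and file name pattern are hypothetical placeholders, not the script’s confirmed internals:

    # Hypothetical sketch: transcript logging for an audit trail.
    $logDirectory = '\\server\share\logs'  # hypothetical central log share
    $transcript = Join-Path $logDirectory ("Set-SelfSignedCertCreds-{0:yyyyMMdd-HHmmss}.log" -f (Get-Date))
    Start-Transcript -Path $transcript
    try {
        # ... certificate creation / password retrieval work happens here ...
    }
    finally {
        Stop-Transcript  # always close the transcript, even on error
    }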

    While there are other alternatives for achieving password encryption, such as issuing certificates from their PKI servers or using the ConvertFrom-SecureString cmdlet, Samantha proposed starting the first sprint using self-signed certificates. There are also other methods of cloning certificates, such as using keytool. That utility, however, is a command-line tool that must be downloaded and would not integrate natively into a PowerShell solution.

    A decision must ultimately be made by the customer’s security governance team whether using self-signed certificates is an acceptable risk. A PKI enterprise certificate server solution would normally be warranted if Fabrikam needed wide-scale distribution of client certificates for general user or client authentication, but in this case there will be no more than 3 administrators on the Fabrikam team using the script, so Fabrikam has decided that self-signed certificates are sufficient for this scope after all. Samantha would still like her DevSecOps team to explore a solution (later) which integrates Active Directory Certificate Services as a Certificate Authority for a PKI-based implementation of this project, so that the team can provide this as an option to other clients in the future. For now, they will proceed as planned with self-signed certificates.

    She also noted that if a ConvertFrom-SecureString implementation were employed, it would restrict Fabrikam to using only a single privileged account on a single host per protected service account. With the proposed solution using certificates, Adatum can allow their customers to achieve greater flexibility for more account-host combination scenarios. We’ll explore the details of the Single Account, Single Host (SASH) combination later, but for now, here are some basic images that represent all scenarios at a glance; a short sketch contrasting the DPAPI limitation follows the figure list below.

    Figure 1 Single Account, Single Host (SASH)

    Figure 2 Single Account, Multiple Hosts (SAMH)

    Figure 3 Multiple Accounts, Single Host (MASH)

    Figure 4 Multiple Accounts, Multiple Host (MAMH)
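    To illustrate why the ConvertFrom-SecureString approach is limited to the SASH scenario, here is a minimal hedged sketch (the file path is a hypothetical placeholder). Without a -Key parameter, the cmdlet uses the Windows Data Protection API (DPAPI), which binds the ciphertext to the encrypting user and machine:

    # DPAPI-based approach for contrast (SASH only): the ciphertext below can only
    # be reversed by the same user account on the same machine that created it.
    $secure = Read-Host -Prompt 'Service account password' -AsSecureString
    $secure | ConvertFrom-SecureString | Set-Content -Path .\password.dpapi.txt  # hypothetical path
    # Later, on the same user/machine only:
    $restored = Get-Content -Path .\password.dpapi.txt | ConvertTo-SecureString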

    Finally, this implementation should also allow administrators to create multiple credential sets for multiple service accounts, where each service account will have its own folder of artifacts. These artifacts will include a username, password, and certificate file, stored and secured using access control entries on a central file server.

    On Tuesday, Samantha suggests that in order to develop a general outline of the script process for everyone on the team, they should first document that process. David, having some recent exposure to UML modeling from college, volunteers to create an activity diagram that will serve both as their guide to develop the solution and will also be included in the customer documentation for this project when completed.


    Figure 5 Set-SelfSignedCertCreds.ps1 Script Sequence of Activities

     

    DETAILS

    David finished the diagram on Tuesday, and on Wednesday morning he decides that, to better explain the process, he will also need to list the details of each step.


    IMPORTANT: The following steps are associated with running this script initially to create the certificate and encrypted account credentials with the -ExportCert parameter.

    Set-SelfSignedCertCreds.ps1 Script Sequence of Activities Details: -ExportCert:$true

    Step 1: When you run this script initially, it must be executed using the -ExportCert parameter. This is because a self-signed certificate is required to encrypt a specified service account password. The service account name value used with the -svcAccountName parameter and the encrypted password, along with the certificate, are all exported to the directory path entered for the -netDirectory parameter. The -logDirectory parameter is used for indicating the path that will be used for the transcript file. To execute the Set-SelfSignedCertCreds.ps1 script for the first time, use the following parameter set as shown in Example 1 of the Get-Help -Name .\Set-SelfSignedCertCreds.ps1 -ShowWindow results, which is also shown here for convenience:

    EXAMPLE 1

    [WITH the -ExportCert switch parameter]

    .\Set-SelfSignedCertCreds.ps1 -netDirectory "\\<server>\<share>\<directory>" -logDirectory "\\<server>\<share>\logs" -svcAccountName <svcAccountName> -ExportCert -Verbose

    In this example, a new self-signed certificate will be created and installed. The service account password for the service account name specified will be encrypted and exported to a file share, along with the username.

    The certificate will also be exported from the current machine. The verbose switch is added to show details of certain operations.

    NOTE: If you are using VSCode to run this script, use this expression to dot source the script so that the variables will be available in your session after the script executes.

    . .\Set-SelfSignedCertCreds.ps1 -netDirectory "\\<server>\<share>\<directory>" -logDirectory "\\<server>\<share>\logs" -svcAccountName <svcAccountName> -ExportCert -Verbose

    IMPORTANT: At this point, service credentials can’t be requested yet for decryption unless the script is run again without the -ExportCert parameter. The following steps assume that the -ExportCert parameter switch was specified, with the intent to create the certificate and to encrypt and export the service account password, username, and the encryption/decryption certificate.

    Step 2b: Since we are assuming that the -ExportCert switch was specified, the self-signed certificate will now be created and installed into the Cert:\CurrentUser\My certificate store of the currently logged on user, which is Dave Delta using the alias usr.g1.s1@dev.adatum.com. When the certificate is created, a password for the private key must also be specified. The certificate will have the following properties to support secure encryption and decryption of the service account password:

    # Create parameters for document encryption certificate
    $SelfSignedCertParams =
    @{
        KeyDescription = "PowerShell Script Encryption-Decryption Key"
        Provider = "Microsoft Enhanced RSA and AES Cryptographic Provider"
        KeyFriendlyName = "PSScriptEncryptDecryptKey"
        FriendlyName = "$svcAccountName-PSScriptCipherCert"
        Subject = "$svcAccountName-PSScriptCipherCert"
        KeyUsage = "DataEncipherment"
        Type = "DocumentEncryptionCert"
        HashAlgorithm = "sha256"
        CertStoreLocation = "Cert:\CurrentUser\My"
    } # end params
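    These parameters map directly to the New-SelfSignedCertificate cmdlet, so, as a hedged illustration (assuming the script splats the hashtable unchanged), creating and installing the certificate could look like this:

    # Create the document encryption certificate by splatting the parameter hashtable
    $cert = New-SelfSignedCertificate @SelfSignedCertParams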

    Step 4: A prompt appears for the service account password. As you enter this password, the character values will be obscured from view.

    Step 5: The service account password is encrypted using the public key of the certificate.

    Step 6: To ensure that the encrypted password is centrally accessible to other administrators in the domain, protected with file system and share permissions, and backed up, it is then written to a file and exported to the file server share path previously specified by the -netDirectory parameter as the artifacts location.

    NOTE: A subdirectory will be created with the common name of the service account so that, if multiple service account credentials are encrypted and managed, each service account will have its own unique subfolder. This subfolder will contain the service account password, username, and certificate file.

    Step 7: In this step, the script will also write the service account user name to a file and export it to the -netDirectory artifacts path.

    Step 8: Finally, the previously installed self-signed certificate will be exported to the -netDirectory artifacts path so that it can subsequently be retrieved from a central source by other administrators. These administrators can re-run the script without the -ExportCert parameter to retrieve and decrypt the service account credentials from the same or any other machine that supports the PowerShell PKI module.
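    Pulling steps 4 through 8 together, here is a minimal hedged sketch of the export flow. The file names, variable names, and use of the CMS cmdlets are assumptions for illustration, not the script’s confirmed internals (it uses $cert from the splatting sketch above):

    # Step 4: prompt for the service account password (obscured as typed)
    $svcPassword = Read-Host -Prompt "Enter the $svcAccountName password" -AsSecureString
    $bstr = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($svcPassword)
    $plain = [System.Runtime.InteropServices.Marshal]::PtrToStringAuto($bstr)
    [System.Runtime.InteropServices.Marshal]::ZeroFreeBSTR($bstr)

    # Step 5: encrypt with the document encryption certificate's public key
    $cipher = Protect-CmsMessage -To "cn=$($SelfSignedCertParams.Subject)" -Content $plain

    # Steps 6-7: write the ciphertext and username to the central artifacts share
    $artifacts = Join-Path $netDirectory $svcAccountName
    New-Item -Path $artifacts -ItemType Directory -Force | Out-Null
    Set-Content -Path (Join-Path $artifacts 'password.txt') -Value $cipher
    Set-Content -Path (Join-Path $artifacts 'username.txt') -Value $svcAccountName

    # Step 8: export the certificate (with its private key) for other administrators
    $pfxPassword = Read-Host -Prompt 'Private key password' -AsSecureString
    Export-PfxCertificate -Cert $cert -FilePath (Join-Path $artifacts "$svcAccountName.pfx") -Password $pfxPassword | Out-Null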


    IMPORTANT: The following steps are associated with running this script with the intent to retrieve and decrypt an existing encrypted service account credential set by NOT using the -ExportCert parameter.

    Set-SelfSignedCertCreds.ps1 Script Sequence of Activities Details: -ExportCert:$false

    Step 1: When you run this script without the -ExportCert parameter, the credentials and certificates must already exist on the file server from a previous execution of the script with the -ExportCert parameter. The -logDirectory parameter is used for specifying the path for the transcript and log files. To execute the Set-SelfSignedCertCreds.ps1 script using an existing certificate, use the following parameter set as shown in Examples 2 and 3 of the Get-Help -Name .\Set-SelfSignedCertCreds.ps1 -ShowWindow results, which are also shown here:

    EXAMPLE 2

    [WITHOUT the -ExportCert switch parameter]

    .\Set-SelfSignedCertCreds.ps1 -netDirectory "\\<server>\<share>\<directory>" -logDirectory "\\<server>\<share>\logs" -svcAccountName <svcAccountName> -Verbose

    This command will import the self-signed certificate associated with the service account name if required on a machine, retrieve the previously exported credentials, then use the certificate to decrypt the password component of the credential.

    NOTE: If you are using VSCode to run this script, use this expression to dot source the script so that the variables will be available in your session after the script executes.

    . .\Set-SelfSignedCertCreds.ps1 -netDirectory "\\<server>\<share>\<directory>" -logDirectory "\\<server>\<share>\logs" -svcAccountName <svcAccountName> -Verbose

    EXAMPLE 3

    [WITHOUT THE -ExportCert AND WITH the -SuppressPrompts switch parameter]

    .\Set-SelfSignedCertCreds.ps1 -netDirectory "\\<server>\<share>\<directory>" -logDirectory "\\<server>\<share>\logs" -svcAccountName <svcAccountName> -SuppressPrompts -Verbose

    This command will import the self-signed certificate if required on a machine, retrieve the previously exported credentials associated with the service account name specified, then use the certificate to decrypt the password component of the credential. In this case, all interactive prompts will be suppressed, but transcript logging will continue.

    This switch is intended for non-interactive scenarios such as dot sourcing this script from another in order to retrieve the service account credential set for use in the main script.

    NOTE: If you are using VSCode to run this script, use this expression to dot source the script so that the variables will be available in your session after the script executes.

    . .\Set-SelfSignedCertCreds.ps1 -netDirectory "\\<server>\<share>\<directory>" -logDirectory "\\<server>\<share>\logs" -svcAccountName <svcAccountName> -SuppressPrompts -Verbose

    Step 2a1: If Example 2 in Step 1 was used, where the -SuppressPrompts parameter was NOT used, the following list of tasks will be presented to the administrator, along with a prompt to continue or terminate the script.

    The following actions will be performed:

    1. Import the certificate $($SelfSignedCertParams.Subject) if not already imported and installed to $($SelfSignedCertParams.CertStoreLocation) for the current user account.

    2. Retrieve the username and password credential set for the service account that will be used to execute scripts.

    3. Decrypt the password for the service account using the imported certificate.

    4. Construct a credential set based on the retrieved service account username and the decrypted service account password.

    PLEASE PRESS [ENTER] TO CONTINUE OR [CTRL-C] TO QUIT.

    Step 2a2: The *.pfx certificate will be imported from the file server into the current user certificate store at Cert:\CurrentUser\My.

    Step 9: The service account username will be retrieved from the file server.

    Step 10: The service account password is retrieved from the file server and decrypted.

    Step 11: Next, a credential object will be constructed from both the retrieved service account username and the decrypted password.

    Step 12: Finally, the credential set will be displayed to show that its components were successfully retrieved from the file server and constructed properly. This credential object can now be used for executing commands or scripts which require this service identity for authentication. This is what the credential set will look like when displayed in the console:

    UserName : svc.ps.script@dev.adatum.com

    Password : System.Security.SecureString
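    Again as a hedged sketch, using the same hypothetical file names and CMS cmdlets as the export sketch above, steps 9 through 12 could look like this:

    # Step 9: retrieve the service account username from the file server
    $artifacts = Join-Path $netDirectory $svcAccountName
    $userName = Get-Content -Path (Join-Path $artifacts 'username.txt')

    # Step 10: retrieve and decrypt the password; Unprotect-CmsMessage locates the
    # imported certificate's private key in the current user store automatically
    $plain = Unprotect-CmsMessage -Content (Get-Content -Path (Join-Path $artifacts 'password.txt') -Raw)

    # Steps 11-12: construct and display the credential object
    $securePw = ConvertTo-SecureString -String $plain -AsPlainText -Force
    $svcAccountCred = [pscredential]::new($userName, $securePw)
    $svcAccountCred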

    Now that we have a basic idea of the process flow, let’s look at the topology for this implementation.

    On Wednesday afternoon, Samantha reviews the activity diagram and the detailed list, but would also like to have a physical view of the process, both to reinforce the team’s own understanding and also to include in the project Wiki for the customer. Oliver, having the most operations experience, will create an infrastructure diagram on Thursday that will coincide with the steps already outlined from David’s list above. It’s late morning now on Thursday, and as Oliver is completing this task, he decides to embellish the diagram with a few logical elements for the -ExportCert and -SuppressPrompts switched parameters.

    Figure 6 Set-SelfSignedCertCreds.ps1 Infrastructure View

    By Thursday evening, Samantha is delighted at the combined efforts of the team so far to produce all this cool documentation, and during the daily scrum meeting the team reviews the results of a test plan that David developed and executed earlier that day. Here is the use case image for the test plan.


    Figure 7 Integration Test Plan

    Testing Interactive Authentication

    # Test parameters
    # TASK-ITEM: Update these parameters with your own custom values for your environment.
    $remoteTestMachine = "<remoteTestMachine>"
    $scriptPath = "<scriptPath>"
    $scriptContent = "Get-ChildItem -Path 'c:\'"

    # Test case 1.0: To test a command interactively, use the following expression:
    # tc1.1 Interactive command test
    Invoke-Command -ComputerName $remoteTestMachine `
        -ScriptBlock { Get-ChildItem -Path "c:\" } -Credential $svcAccountCred

    A successful result should look similar to the following output:


    Figure 8 Interactive Directory Listing Test

    To test a scheduled job, use the following code snippet.

    # Test case 2.0: Register scheduled job using a script file, which contains the code: Get-ChildItem -Path "c:\"
    # tc2.1 Register the job using the script file
    Register-ScheduledJob -Name psjob1 -FilePath $scriptPath -Credential $svcAccountCred

    # tc2.2 Create a trigger for 10 seconds from now
    $trigger1 = New-JobTrigger -At (Get-Date).AddSeconds(10) -Once -Verbose

    # tc2.3 Add the trigger to the job
    Add-JobTrigger -Name psjob1 -Trigger $trigger1 -Verbose

    # tc2.4 After 20 seconds, get the job information.
    Start-Sleep -Seconds 20 -Verbose
    Get-ScheduledJob -Name psjob1 -Verbose

    # tc2.5 Retrieve the results
    Receive-Job -Name psjob1 -Keep -Verbose

    # tc2.6 The scheduled jobs will appear in the Task Scheduler at the path: Microsoft\Windows\PowerShell\ScheduledJobs

    # tc2.7 Remove the job
    Get-ScheduledJob -Name psjob1 | Unregister-ScheduledJob -Verbose

    The console view of the commands and results should resemble this:


    Figure 9 Scheduled Directory Listing from Script File Test

    For the final test case, we’ll schedule another job but use a script block instead of a script file this time.

    # Test case 3.0: Register scheduled job using a script block
    # tc3.1 Register scheduled job
    Register-ScheduledJob -Name psjob2 `
        -ScriptBlock { Get-ChildItem -Path "\\azrads1003.dev.adatum.com\c$" } `
        -Credential $svcAccountCred -Verbose

    # tc3.2 Create a trigger for 10 seconds from now
    $trigger = New-JobTrigger -At (Get-Date).AddSeconds(10) -Once -Verbose

    # tc3.3 Add the trigger to the job
    Add-JobTrigger -Name psjob2 -Trigger $trigger -Verbose

    # tc3.4 After 20 seconds, get the job information.
    Start-Sleep -Seconds 20 -Verbose
    Get-ScheduledJob -Name psjob2 -Verbose

    # tc3.5 Retrieve the results
    Receive-Job -Name psjob2 -Keep -Verbose

    # tc3.6 The scheduled jobs will appear in the Task Scheduler at the path: Microsoft\Windows\PowerShell\ScheduledJobs

    # tc3.7 Remove the job
    Get-ScheduledJob -Name psjob2 | Unregister-ScheduledJob -Verbose

    Executing this snippet produces the result shown here:


    Figure 10 Scheduled Directory Listing from Script Block Test

    On Friday morning, the team meets with the Fabrikam customers for a sprint review to demonstrate the solution. Although David knows the most intricate details of the project, he can’t present it because he’s fictitious, remember? So I’ll just have to show the demo on his behalf: (This is the best part if you’ve made it this far and would just prefer to watch a video 😉)

    Demo: See the video of the demo here.

    Wrapping Up

    With this solution, you get the convenience of avoiding manual password entries, the security of encrypted credentials and the capability of expanding this to multiple accounts across multiple systems.

    Although you can use the ConvertFrom-SecureString cmdlet, PKI-based certificates, external command-line utilities, or even certain cloud-based services, these approaches may not always suit your specific needs. Using document encryption self-signed certificates may be more appropriate to reduce or eliminate dependencies on the PKI team if it’s separate from Dev, Sec, and/or Ops, or to quickly prototype solutions in Dev/Test environments as proofs of concept before expanding the scale to a production PKI implementation. You can even use document encryption certificates for Desired State Configuration deployments.

    If your security requirements include reducing administrative effort while satisfying compliance and maintaining an audit trail, then the method described in this post may be just what you were looking for. Besides, having this knowledge may help train new techies when they onboard, or even just provide the intrinsic satisfaction that comes from being able to fully report or explain it to management 😊

    Where’s the Source Code?

        The source code can be viewed at: http://bit.ly/2QagVOq and downloaded from: http://bit.ly/2QX75Vt

    ACKNOWLEDGEMENTS

    I’d like to extend a personal thanks and appreciation to the contributors of all the resources listed below, which helped me to sufficiently research the topic for this post. I’d like to also give a quick shout-out to one of my colleagues Bill Grauer, who gave me the idea and an initial script to implement a similar requirement for one of our customers. Thanks also to Dale Vincent for his technical review and Brandon Wilson for formatting suggestions.

    REFERENCES

    1. Using Self-Signed Certificates to Encrypt Text
    2. Using a Certificate to Encrypt Credentials…
    3. Using a Certificate to Encrypt Credentials… (Update)
    4. ISO Standard 27005, Edition 3:v1:en
    5. ISO-IEC 27001 Information Security
    6. PowerShell Code to Store User Credentials Encrypted for Re-use
    7. Encrypt and Store your Passwords and use them for remote…
    8. ConvertFrom-SecureString
    9. Data Protection API
    10. Searching for File attributes
    11. Function to Create Certificate Template in ADCS…
    12. Clone Certificate with New Identity (keytool)
    13. Keytool, Oracle Java SE Documentation